Oct 14 13:14:41 crc systemd[1]: Starting Kubernetes Kubelet...
Oct 14 13:14:41 crc restorecon[4661]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Oct 14 13:14:41 crc restorecon[4661]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 14 13:14:41 crc restorecon[4661]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 14 13:14:41 crc restorecon[4661]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 14 13:14:41 crc restorecon[4661]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 14 13:14:41 crc restorecon[4661]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 14 13:14:41 crc restorecon[4661]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 14 13:14:41 crc restorecon[4661]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 14 13:14:41 crc restorecon[4661]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 14 13:14:41 crc restorecon[4661]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 14 13:14:41 crc restorecon[4661]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Oct 14 13:14:41 crc restorecon[4661]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 14 13:14:41 crc restorecon[4661]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 14 13:14:41 crc restorecon[4661]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 14 13:14:41 crc restorecon[4661]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 14 13:14:41 crc restorecon[4661]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 14 13:14:41 crc restorecon[4661]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Oct 14 13:14:41 crc restorecon[4661]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 14 13:14:41 crc restorecon[4661]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 13:14:41 crc restorecon[4661]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 13:14:41 crc restorecon[4661]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 13:14:41 crc restorecon[4661]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 13:14:41 crc restorecon[4661]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 13:14:41 crc restorecon[4661]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 13:14:41 crc restorecon[4661]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 13:14:41 crc restorecon[4661]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 13:14:41 crc restorecon[4661]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 13:14:41 crc restorecon[4661]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 13:14:41 crc restorecon[4661]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 13:14:41 crc restorecon[4661]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 13:14:41 crc restorecon[4661]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 13:14:41 crc restorecon[4661]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 14 13:14:41 crc restorecon[4661]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Oct 14 13:14:41 crc restorecon[4661]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 14 13:14:41 crc restorecon[4661]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 14 13:14:41 crc restorecon[4661]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 14 13:14:41 crc restorecon[4661]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Oct 14 13:14:41 crc restorecon[4661]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Oct 14 13:14:41 crc restorecon[4661]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 13:14:41 crc restorecon[4661]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 13:14:41 crc restorecon[4661]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 13:14:41 crc restorecon[4661]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 13:14:41 crc restorecon[4661]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 13:14:41 crc restorecon[4661]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 13:14:41 crc restorecon[4661]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]:
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 
13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 14 13:14:42 crc 
restorecon[4661]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 
13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 
13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc 
restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 13:14:42 crc restorecon[4661]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 14 13:14:42 crc restorecon[4661]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 14 13:14:42 crc restorecon[4661]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Oct 14 13:14:43 crc kubenswrapper[4725]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 14 13:14:43 crc kubenswrapper[4725]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Oct 14 13:14:43 crc kubenswrapper[4725]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 14 13:14:43 crc kubenswrapper[4725]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
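The long run of restorecon entries ending above is expected on kubelet start: the unit relabels /var/lib/kubelet, and restorecon skips any file whose current type is in the SELinux customizable_types list (container_file_t is customizable on RHEL-family systems) unless forced with -F, reporting each skipped path as "not reset as customized by admin". These entries are informational, not failures. A minimal sketch for summarizing them, assuming the journal text is piped on stdin; the regex mirrors the entry format of this capture, using \s+ between tokens because entries wrap across line breaks here:

    import re
    import sys
    from collections import Counter

    # Matches restorecon entries of the form seen above, e.g.
    #   restorecon[4661]: /var/lib/kubelet/... not reset as customized
    #   by admin to system_u:object_r:container_file_t:s0:c7,c13
    # Assumes individual tokens (paths, labels) are never split mid-word.
    ENTRY = re.compile(
        r"restorecon\[\d+\]:\s+(?P<path>/\S+)\s+not\s+reset\s+as\s+"
        r"customized\s+by\s+admin\s+to\s+(?P<label>\S+)"
    )

    def skipped_by_label(journal_text: str) -> Counter:
        """Count how many paths restorecon skipped, grouped by target label."""
        return Counter(m.group("label") for m in ENTRY.finditer(journal_text))

    if __name__ == "__main__":
        for label, count in skipped_by_label(sys.stdin.read()).most_common():
            print(f"{count:6d}  {label}")

Grouping by label makes the MCS category pairs visible (s0:c7,c13 for the catalog pods, s0 for the plugin sockets, and so on), which is usually enough to confirm the skips line up with per-pod container labeling rather than a mislabeled host path.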
Oct 14 13:14:43 crc kubenswrapper[4725]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 14 13:14:43 crc kubenswrapper[4725]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.652875 4725 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.662689 4725 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.662730 4725 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.662737 4725 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.662742 4725 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.662748 4725 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.662755 4725 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.662761 4725 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.662769 4725 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.662777 4725 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.662785 4725 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.662791 4725 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.662798 4725 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.662804 4725 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.662810 4725 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.662815 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.662822 4725 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
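The "Flag ... has been deprecated" entries above come from kubelet argument parsing: the flags still take effect, but most are meant to move into the file passed via --config (a KubeletConfiguration object), per the linked kubelet-config-file documentation. A minimal sketch of that migration mapping; the field names follow the kubelet.config.k8s.io/v1beta1 KubeletConfiguration API and are an assumption to verify against the kubelet release actually in use:

    # Deprecated kubelet flags warned about above, mapped to the
    # KubeletConfiguration (v1beta1) fields that replace them.
    # Field names are assumptions; check them against the running release.
    FLAG_TO_CONFIG_FIELD = {
        "--container-runtime-endpoint": "containerRuntimeEndpoint",
        "--volume-plugin-dir": "volumePluginDir",
        "--register-with-taints": "registerWithTaints",
        "--system-reserved": "systemReserved",
        # --minimum-container-ttl-duration has no direct field; the warning
        # above points at evictionHard / evictionSoft instead.
        # --pod-infra-container-image is being removed outright; the CRI
        # runtime supplies the sandbox image, as the server.go entry notes.
    }

    def migration_hints(kubelet_cmdline: str) -> list[str]:
        """Emit one hint per deprecated flag found on a kubelet command line."""
        return [
            f"{flag} -> KubeletConfiguration.{field}"
            for flag, field in FLAG_TO_CONFIG_FIELD.items()
            if flag in kubelet_cmdline
        ]

On this node the flags are rendered by the machine-config stack rather than hand-written, so the warnings persist until the rendered unit itself moves these settings into the config file.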
Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.662829 4725 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.662835 4725 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.662842 4725 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.662848 4725 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.662853 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.662859 4725 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.662864 4725 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.662870 4725 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.662879 4725 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.662885 4725 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.662899 4725 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.662909 4725 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.662917 4725 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.662924 4725 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.662938 4725 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.662947 4725 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.662966 4725 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.662972 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.662978 4725 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.662984 4725 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.662989 4725 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.662994 4725 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.662999 4725 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.663005 4725 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.663012 4725 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
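[annotation] The run of "unrecognized feature gate" warnings above (it continues below, and the whole set is re-logged several times as successive initialization passes parse the same gate list) consists of OpenShift-side gates this kubelet build does not know about; it warns and ignores each one. A short sketch for pulling the distinct gate names out of a journal dump follows; the journalctl unit name and script name are illustrative assumptions, not taken from this log.

    import re
    import sys

    # Collect distinct "unrecognized feature gate: <Name>" warnings from a
    # journal dump on stdin, e.g.:  journalctl -u kubelet | python3 gates.py
    PATTERN = re.compile(r"unrecognized feature gate: (\S+)")

    def main() -> None:
        gates = set()
        for line in sys.stdin:
            # findall tolerates several records flattened onto one line
            gates.update(PATTERN.findall(line))
        for name in sorted(gates):
            print(name)

    if __name__ == "__main__":
        main()
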
Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.663019 4725 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.663026 4725 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.663035 4725 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.663043 4725 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.663050 4725 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.663055 4725 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.663061 4725 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.663067 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.663081 4725 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.663089 4725 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.663096 4725 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.663104 4725 feature_gate.go:330] unrecognized feature gate: Example Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.663110 4725 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.663117 4725 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.663123 4725 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.663131 4725 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.663139 4725 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.663147 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.663153 4725 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.663160 4725 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.663166 4725 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.663171 4725 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.663176 4725 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.663181 4725 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.663200 4725 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.663207 4725 feature_gate.go:330] 
unrecognized feature gate: ExternalOIDC Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.663214 4725 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.663220 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.663227 4725 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.663234 4725 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663369 4725 flags.go:64] FLAG: --address="0.0.0.0" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663388 4725 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663405 4725 flags.go:64] FLAG: --anonymous-auth="true" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663414 4725 flags.go:64] FLAG: --application-metrics-count-limit="100" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663424 4725 flags.go:64] FLAG: --authentication-token-webhook="false" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663431 4725 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663440 4725 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663473 4725 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663482 4725 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663490 4725 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663499 4725 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663509 4725 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663518 4725 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663528 4725 flags.go:64] FLAG: --cgroup-root="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663537 4725 flags.go:64] FLAG: --cgroups-per-qos="true" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663546 4725 flags.go:64] FLAG: --client-ca-file="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663554 4725 flags.go:64] FLAG: --cloud-config="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663561 4725 flags.go:64] FLAG: --cloud-provider="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663569 4725 flags.go:64] FLAG: --cluster-dns="[]" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663578 4725 flags.go:64] FLAG: --cluster-domain="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663585 4725 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663592 4725 flags.go:64] FLAG: --config-dir="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663598 4725 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663605 4725 flags.go:64] FLAG: --container-log-max-files="5" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663621 4725 flags.go:64] FLAG: 
--container-log-max-size="10Mi" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663628 4725 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663634 4725 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663641 4725 flags.go:64] FLAG: --containerd-namespace="k8s.io" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663648 4725 flags.go:64] FLAG: --contention-profiling="false" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663654 4725 flags.go:64] FLAG: --cpu-cfs-quota="true" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663660 4725 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663667 4725 flags.go:64] FLAG: --cpu-manager-policy="none" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663673 4725 flags.go:64] FLAG: --cpu-manager-policy-options="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663682 4725 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663689 4725 flags.go:64] FLAG: --enable-controller-attach-detach="true" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663696 4725 flags.go:64] FLAG: --enable-debugging-handlers="true" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663702 4725 flags.go:64] FLAG: --enable-load-reader="false" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663709 4725 flags.go:64] FLAG: --enable-server="true" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663715 4725 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663724 4725 flags.go:64] FLAG: --event-burst="100" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663731 4725 flags.go:64] FLAG: --event-qps="50" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663737 4725 flags.go:64] FLAG: --event-storage-age-limit="default=0" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663743 4725 flags.go:64] FLAG: --event-storage-event-limit="default=0" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663749 4725 flags.go:64] FLAG: --eviction-hard="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663757 4725 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663763 4725 flags.go:64] FLAG: --eviction-minimum-reclaim="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663769 4725 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663777 4725 flags.go:64] FLAG: --eviction-soft="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663783 4725 flags.go:64] FLAG: --eviction-soft-grace-period="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663790 4725 flags.go:64] FLAG: --exit-on-lock-contention="false" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663795 4725 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663802 4725 flags.go:64] FLAG: --experimental-mounter-path="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663808 4725 flags.go:64] FLAG: --fail-cgroupv1="false" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663813 4725 flags.go:64] FLAG: --fail-swap-on="true" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 
13:14:43.663819 4725 flags.go:64] FLAG: --feature-gates="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663835 4725 flags.go:64] FLAG: --file-check-frequency="20s" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663841 4725 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663847 4725 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663854 4725 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663860 4725 flags.go:64] FLAG: --healthz-port="10248" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663867 4725 flags.go:64] FLAG: --help="false" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663874 4725 flags.go:64] FLAG: --hostname-override="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663882 4725 flags.go:64] FLAG: --housekeeping-interval="10s" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663891 4725 flags.go:64] FLAG: --http-check-frequency="20s" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663899 4725 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663907 4725 flags.go:64] FLAG: --image-credential-provider-config="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663915 4725 flags.go:64] FLAG: --image-gc-high-threshold="85" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663922 4725 flags.go:64] FLAG: --image-gc-low-threshold="80" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663930 4725 flags.go:64] FLAG: --image-service-endpoint="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663938 4725 flags.go:64] FLAG: --kernel-memcg-notification="false" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663946 4725 flags.go:64] FLAG: --kube-api-burst="100" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663953 4725 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663962 4725 flags.go:64] FLAG: --kube-api-qps="50" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663968 4725 flags.go:64] FLAG: --kube-reserved="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663974 4725 flags.go:64] FLAG: --kube-reserved-cgroup="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663981 4725 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663987 4725 flags.go:64] FLAG: --kubelet-cgroups="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663993 4725 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.663999 4725 flags.go:64] FLAG: --lock-file="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664005 4725 flags.go:64] FLAG: --log-cadvisor-usage="false" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664011 4725 flags.go:64] FLAG: --log-flush-frequency="5s" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664018 4725 flags.go:64] FLAG: --log-json-info-buffer-size="0" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664027 4725 flags.go:64] FLAG: --log-json-split-stream="false" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664041 4725 flags.go:64] FLAG: --log-text-info-buffer-size="0" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664048 4725 flags.go:64] FLAG: 
--log-text-split-stream="false" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664054 4725 flags.go:64] FLAG: --logging-format="text" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664061 4725 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664067 4725 flags.go:64] FLAG: --make-iptables-util-chains="true" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664073 4725 flags.go:64] FLAG: --manifest-url="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664079 4725 flags.go:64] FLAG: --manifest-url-header="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664094 4725 flags.go:64] FLAG: --max-housekeeping-interval="15s" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664101 4725 flags.go:64] FLAG: --max-open-files="1000000" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664109 4725 flags.go:64] FLAG: --max-pods="110" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664116 4725 flags.go:64] FLAG: --maximum-dead-containers="-1" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664122 4725 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664133 4725 flags.go:64] FLAG: --memory-manager-policy="None" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664139 4725 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664146 4725 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664152 4725 flags.go:64] FLAG: --node-ip="192.168.126.11" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664158 4725 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664174 4725 flags.go:64] FLAG: --node-status-max-images="50" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664181 4725 flags.go:64] FLAG: --node-status-update-frequency="10s" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664187 4725 flags.go:64] FLAG: --oom-score-adj="-999" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664193 4725 flags.go:64] FLAG: --pod-cidr="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664199 4725 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664208 4725 flags.go:64] FLAG: --pod-manifest-path="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664214 4725 flags.go:64] FLAG: --pod-max-pids="-1" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664220 4725 flags.go:64] FLAG: --pods-per-core="0" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664227 4725 flags.go:64] FLAG: --port="10250" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664233 4725 flags.go:64] FLAG: --protect-kernel-defaults="false" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664239 4725 flags.go:64] FLAG: --provider-id="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664245 4725 flags.go:64] FLAG: --qos-reserved="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664252 4725 flags.go:64] FLAG: --read-only-port="10255" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664258 4725 flags.go:64] FLAG: 
--register-node="true" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664264 4725 flags.go:64] FLAG: --register-schedulable="true" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664269 4725 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664281 4725 flags.go:64] FLAG: --registry-burst="10" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664287 4725 flags.go:64] FLAG: --registry-qps="5" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664293 4725 flags.go:64] FLAG: --reserved-cpus="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664301 4725 flags.go:64] FLAG: --reserved-memory="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664309 4725 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664316 4725 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664323 4725 flags.go:64] FLAG: --rotate-certificates="false" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664329 4725 flags.go:64] FLAG: --rotate-server-certificates="false" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664335 4725 flags.go:64] FLAG: --runonce="false" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664341 4725 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664347 4725 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664355 4725 flags.go:64] FLAG: --seccomp-default="false" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664361 4725 flags.go:64] FLAG: --serialize-image-pulls="true" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664367 4725 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664374 4725 flags.go:64] FLAG: --storage-driver-db="cadvisor" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664380 4725 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664387 4725 flags.go:64] FLAG: --storage-driver-password="root" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664400 4725 flags.go:64] FLAG: --storage-driver-secure="false" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664406 4725 flags.go:64] FLAG: --storage-driver-table="stats" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664413 4725 flags.go:64] FLAG: --storage-driver-user="root" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664419 4725 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664426 4725 flags.go:64] FLAG: --sync-frequency="1m0s" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664432 4725 flags.go:64] FLAG: --system-cgroups="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664439 4725 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664469 4725 flags.go:64] FLAG: --system-reserved-cgroup="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664476 4725 flags.go:64] FLAG: --tls-cert-file="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664482 4725 flags.go:64] FLAG: --tls-cipher-suites="[]" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664491 4725 flags.go:64] 
FLAG: --tls-min-version="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664499 4725 flags.go:64] FLAG: --tls-private-key-file="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664505 4725 flags.go:64] FLAG: --topology-manager-policy="none" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664512 4725 flags.go:64] FLAG: --topology-manager-policy-options="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664518 4725 flags.go:64] FLAG: --topology-manager-scope="container" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664525 4725 flags.go:64] FLAG: --v="2" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664534 4725 flags.go:64] FLAG: --version="false" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664542 4725 flags.go:64] FLAG: --vmodule="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664555 4725 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.664561 4725 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.664751 4725 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.664759 4725 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.664767 4725 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.664774 4725 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.664781 4725 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.664787 4725 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.664795 4725 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.664801 4725 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.664807 4725 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.664812 4725 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.664818 4725 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.664823 4725 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.664830 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.664837 4725 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
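[annotation] Every record in the FLAG dump above has the same shape, flags.go:64] FLAG: --name="value", one pair per record, so the whole dump reduces to a lookup table. A minimal sketch, assuming the log text arrives on stdin; the non-greedy match lets multiple records flattened onto one line still parse.

    import re
    import sys

    # Turn the kubelet's startup FLAG dump into a name -> value table.
    # Records look like:
    #   flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
    FLAG_RE = re.compile(r'FLAG: --([\w-]+)="(.*?)"')

    def parse_flags(text: str) -> dict:
        return {name: value for name, value in FLAG_RE.findall(text)}

    if __name__ == "__main__":
        flags = parse_flags(sys.stdin.read())
        # e.g. the runtime endpoint recorded for this boot
        print(flags.get("container-runtime-endpoint"))
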
Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.664843 4725 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.664849 4725 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.664856 4725 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.664861 4725 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.664867 4725 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.664873 4725 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.664879 4725 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.664887 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.664894 4725 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.664901 4725 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.664909 4725 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.664916 4725 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.664922 4725 feature_gate.go:330] unrecognized feature gate: Example Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.664927 4725 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.664934 4725 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.664940 4725 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.664946 4725 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.664952 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.664958 4725 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.664965 4725 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.664972 4725 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.664978 4725 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.664985 4725 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.664990 4725 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.664998 4725 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.665004 4725 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.665010 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.665015 4725 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.665021 4725 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.665027 4725 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.665032 4725 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.665038 4725 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.665044 4725 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.665049 4725 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.665054 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.665060 4725 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.665065 4725 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.665070 4725 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.665076 4725 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.665081 4725 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.665087 4725 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.665092 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.665098 4725 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.665104 4725 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.665109 4725 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.665115 4725 feature_gate.go:330] unrecognized 
feature gate: OnClusterBuild Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.665121 4725 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.665126 4725 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.665132 4725 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.665137 4725 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.665143 4725 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.665148 4725 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.665154 4725 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.665160 4725 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.665165 4725 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.665171 4725 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.665177 4725 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.666141 4725 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.677381 4725 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.677443 4725 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.677626 4725 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.677642 4725 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.677651 4725 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.677662 4725 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.677670 4725 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.677678 4725 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.677689 4725 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
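[annotation] After each pass of warnings, the effective gate set is printed once as a Go map literal (the "feature gates: {map[...]}" record above; the same map is re-logged later in this boot). A sketch that converts that literal into a Python dict of booleans:

    import re

    # Parse a kubelet "feature gates: {map[Name:true ...]}" record into a dict.
    MAP_RE = re.compile(r"feature gates: \{map\[([^\]]*)\]\}")

    def parse_gate_map(record: str) -> dict:
        m = MAP_RE.search(record)
        if not m:
            return {}
        pairs = (item.split(":", 1) for item in m.group(1).split())
        return {name: value == "true" for name, value in pairs}

    if __name__ == "__main__":
        sample = "feature gates: {map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false]}"
        print(parse_gate_map(sample))
        # {'CloudDualStackNodeIPs': True, 'KMSv1': True, 'NodeSwap': False}
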
Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.677700 4725 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.677708 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.677716 4725 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.677724 4725 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.677732 4725 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.677739 4725 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.677747 4725 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.677754 4725 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.677762 4725 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.677770 4725 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.677777 4725 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.677785 4725 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.677792 4725 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.677800 4725 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.677808 4725 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.677815 4725 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.677823 4725 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.677830 4725 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.677838 4725 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.677846 4725 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.677854 4725 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.677861 4725 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.677870 4725 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.677878 4725 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.677889 4725 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.677896 4725 feature_gate.go:330] unrecognized feature gate: 
IngressControllerDynamicConfigurationManager Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.677904 4725 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.677912 4725 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.677922 4725 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.677933 4725 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.677942 4725 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.677951 4725 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.677959 4725 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.677966 4725 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.677974 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.677982 4725 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.677989 4725 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.677997 4725 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678004 4725 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678013 4725 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678020 4725 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678031 4725 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678042 4725 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678050 4725 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678060 4725 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678069 4725 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678076 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678084 4725 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678093 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678100 4725 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678108 4725 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678116 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678124 4725 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678132 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678141 4725 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678149 4725 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678158 4725 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678166 4725 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678176 4725 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678186 4725 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678196 4725 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678204 4725 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678213 4725 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678221 4725 feature_gate.go:330] unrecognized feature gate: Example Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.678236 4725 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678484 4725 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678496 4725 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678504 4725 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678512 4725 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678520 4725 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678528 4725 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678538 4725 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678548 4725 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678558 4725 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678567 4725 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678578 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678591 4725 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678602 4725 feature_gate.go:330] unrecognized feature gate: Example Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678610 4725 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678618 4725 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678626 4725 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678635 4725 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678643 4725 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678651 4725 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678659 4725 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678667 4725 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678676 4725 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678683 4725 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678691 4725 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678699 4725 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678707 4725 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678715 4725 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678723 4725 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678730 4725 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678741 4725 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678749 4725 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678757 4725 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678764 4725 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678772 4725 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678780 4725 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678787 4725 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678795 4725 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678803 4725 feature_gate.go:330] unrecognized 
feature gate: PinnedImages Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678811 4725 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678818 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678826 4725 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678835 4725 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678843 4725 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678850 4725 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678858 4725 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678866 4725 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678875 4725 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678883 4725 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678891 4725 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678899 4725 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678906 4725 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678915 4725 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678923 4725 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678930 4725 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678938 4725 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678945 4725 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678953 4725 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678962 4725 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678970 4725 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678977 4725 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678985 4725 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.678992 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.679002 4725 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.679013 4725 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.679023 4725 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.679037 4725 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.679045 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.679054 4725 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.679064 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.679072 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.679080 4725 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.679093 4725 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.679358 4725 server.go:940] "Client rotation is on, will bootstrap in background" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.685597 4725 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.685735 4725 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
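[annotation] The certificate-rotation records that follow involve simple arithmetic: the logged wait is just (rotation deadline − now). A quick check using the timestamps copied from those records; the microsecond figures are the log's own fields, and truncating to whole seconds is why the printed wait drops the fractional part of the logged 1921h24m30.714301614s.

    from datetime import datetime, timezone

    # Timestamps copied from the rotation records below: the kubelet logs at
    # ~13:14:43.688 UTC and reports a rotation deadline of
    # 2026-01-02 14:39:14.402447 UTC, then waits (deadline - now).
    LOGGED_AT = datetime(2025, 10, 14, 13, 14, 43, 688151, tzinfo=timezone.utc)
    DEADLINE = datetime(2026, 1, 2, 14, 39, 14, 402447, tzinfo=timezone.utc)

    wait = DEADLINE - LOGGED_AT
    hours, rem = divmod(int(wait.total_seconds()), 3600)
    minutes, seconds = divmod(rem, 60)
    print(f"{hours}h{minutes}m{seconds}s")  # 1921h24m30s, matching the logged wait
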
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.687795 4725 server.go:997] "Starting client certificate rotation"
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.687835 4725 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.688051 4725 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-02 14:39:14.402447272 +0000 UTC
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.688151 4725 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 1921h24m30.714301614s for next certificate rotation
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.719681 4725 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.727204 4725 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.751859 4725 log.go:25] "Validated CRI v1 runtime API"
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.797311 4725 log.go:25] "Validated CRI v1 image API"
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.801878 4725 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.810069 4725 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-10-14-13-10-10-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.810113 4725 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.832972 4725 manager.go:217] Machine: {Timestamp:2025-10-14 13:14:43.828867476 +0000 UTC m=+0.677302315 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:c40d671b-403d-4187-8320-a34d153a3ed0 BootID:d8471f97-cd84-4e08-baca-4ac91f02188a Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252
DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:2d:3c:e2 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:2d:3c:e2 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:d3:fc:fc Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:ce:a7:58 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:be:88:e5 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:d0:64:c2 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:1a:7d:40:57:03:01 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:ee:51:ec:67:d7:04 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] 
SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.833274 4725 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.833413 4725 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.833755 4725 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.833957 4725 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.834000 4725 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.834223 4725 topology_manager.go:138] "Creating topology manager with none policy"
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.834237 4725 container_manager_linux.go:303] "Creating device plugin manager"
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.834824 4725 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.834857 4725 server.go:66] "Creating device plugin registration server" version="v1beta1"
socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.835610 4725 state_mem.go:36] "Initialized new in-memory state store"
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.835713 4725 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.839024 4725 kubelet.go:418] "Attempting to sync node with API server"
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.839052 4725 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.839070 4725 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.839084 4725 kubelet.go:324] "Adding apiserver pod source"
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.839100 4725 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.845677 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused
Oct 14 13:14:43 crc kubenswrapper[4725]: E1014 13:14:43.845758 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.219:6443: connect: connection refused" logger="UnhandledError"
Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.845744 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused
Oct 14 13:14:43 crc kubenswrapper[4725]: E1014 13:14:43.845848 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.219:6443: connect: connection refused" logger="UnhandledError"
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.845789 4725 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.848688 4725 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
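The connection-refused errors above are normal at this point in boot on a single-node cluster: the kubelet's informers are trying to list objects from api-int.crc.testing:6443, but the kube-apiserver is itself a static pod that this same kubelet has not started yet, so every TCP connect is refused and client-go simply retries. A minimal sketch of such a probe-with-backoff loop, assuming the endpoint from the log (illustrative only, not client-go's actual reflector code):

    // Probe TCP reachability of the apiserver endpoint with exponential
    // backoff, roughly what the kubelet's informers keep doing until the
    // static-pod apiserver comes up.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        addr := "api-int.crc.testing:6443" // endpoint from the log
        backoff := time.Second
        for attempt := 1; attempt <= 5; attempt++ {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                fmt.Println("apiserver reachable")
                return
            }
            fmt.Printf("attempt %d: %v (retrying in %s)\n", attempt, err, backoff)
            time.Sleep(backoff)
            backoff *= 2 // real clients cap and jitter this
        }
    }

The same root cause accounts for the lease, CSINode, and event-write failures that follow: they all target the same not-yet-listening endpoint and resolve on their own once the apiserver static pod is running.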
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.856262 4725 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.860751 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.860834 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.860848 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.860862 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.860882 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.860894 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.860908 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.860927 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.860993 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.861006 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.861053 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.861067 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.862338 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.863329 4725 server.go:1280] "Started kubelet"
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.863563 4725 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.864041 4725 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.864604 4725 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.864775 4725 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.865538 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.865569 4725 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.865745 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 22:46:04.156319374 +0000 UTC
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.865814 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 2217h31m20.290510398s for next certificate rotation
Oct 14 13:14:43 crc kubenswrapper[4725]: E1014 13:14:43.865826 4725 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.866391 4725 volume_manager.go:287] "The desired_state_of_world populator starts"
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.866401 4725 volume_manager.go:289] "Starting Kubelet Volume Manager"
Oct 14 13:14:43 crc systemd[1]: Started Kubernetes Kubelet.
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.871623 4725 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Oct 14 13:14:43 crc kubenswrapper[4725]: E1014 13:14:43.873326 4725 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" interval="200ms"
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.873747 4725 factory.go:55] Registering systemd factory
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.873770 4725 factory.go:221] Registration of the systemd container factory successfully
Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.873698 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused
Oct 14 13:14:43 crc kubenswrapper[4725]: E1014 13:14:43.873894 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.219:6443: connect: connection refused" logger="UnhandledError"
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.874203 4725 factory.go:153] Registering CRI-O factory
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.874240 4725 factory.go:221] Registration of the crio container factory successfully
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.874334 4725 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.874361 4725 factory.go:103] Registering Raw factory
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.874384 4725 manager.go:1196] Started watching for new ooms in manager
Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.875384 4725 manager.go:319] Starting recovery of all containers
Oct 14 13:14:43 crc kubenswrapper[4725]: E1014 13:14:43.874734 4725 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.219:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.186e5dd75a5498ea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-10-14
13:14:43.863288042 +0000 UTC m=+0.711722871,LastTimestamp:2025-10-14 13:14:43.863288042 +0000 UTC m=+0.711722871,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.879857 4725 server.go:460] "Adding debug handlers to kubelet server" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.883371 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.883444 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.883474 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.883488 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.883501 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.883514 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.883527 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.883539 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.883554 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.883594 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Oct 14 
13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.883608 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.883620 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.883632 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.883646 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.883656 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.883667 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.883680 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.883691 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.883702 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.883713 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.883724 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" 
seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.883735 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.883747 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.883758 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.883772 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.883786 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.883799 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.883813 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.883825 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.883837 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.883850 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.883889 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Oct 
14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.883905 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.883936 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.883947 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.883959 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.883971 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.883981 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.883994 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.884028 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.884040 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.884052 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.884063 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Oct 14 13:14:43 crc 
kubenswrapper[4725]: I1014 13:14:43.884075 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.884086 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.884097 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.884132 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.884728 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.884774 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.884788 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.884797 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.884809 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.884831 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.884843 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.884855 4725 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.884866 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.884876 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.884889 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.884908 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.884926 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.884940 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.884952 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.884968 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885024 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885037 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885050 4725 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885063 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885078 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885090 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885101 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885111 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885121 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885131 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885141 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885152 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885164 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885176 4725 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885186 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885197 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885211 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885225 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885235 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885247 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885311 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885323 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885335 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885345 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885358 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885368 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885379 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885390 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885400 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885410 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885420 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885430 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885441 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885471 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885481 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885491 4725 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885501 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885512 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885521 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885531 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885542 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885557 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885569 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885580 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885592 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885603 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885613 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885624 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885634 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885646 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885656 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885665 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885675 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885688 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885698 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885709 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885718 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885727 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885736 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885746 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.885758 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.887723 4725 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.887755 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.887777 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.887787 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.887796 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.887806 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.887818 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.887828 4725 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.887840 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.887851 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.887862 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.887871 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.887882 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.887891 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.887905 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.887916 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.887927 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.887937 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.887949 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.887958 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.887968 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.887978 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.887988 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.887999 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888012 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888022 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888032 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888043 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888052 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888062 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888072 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888084 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888094 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888106 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888117 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888128 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888139 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888150 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888160 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888173 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888185 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888195 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888205 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888216 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888226 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888237 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888247 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888257 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888266 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888275 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888285 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888295 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888304 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888313 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888323 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888333 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888343 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888356 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888366 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888377 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888386 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888396 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888405 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888416 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888429 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888443 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888475 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888491 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888504 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888517 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888528 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888540 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888555 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888567 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888578 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888589 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888600 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888611 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888624 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888634 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888647 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888658 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888672 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888684 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888693 4725 reconstruct.go:97] "Volume reconstruction finished" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.888702 4725 reconciler.go:26] 
"Reconciler: start to sync state" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.905194 4725 manager.go:324] Recovery completed Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.916043 4725 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.918033 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.919790 4725 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.919834 4725 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.919866 4725 kubelet.go:2335] "Starting kubelet main sync loop" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.919895 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.919932 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.919945 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:14:43 crc kubenswrapper[4725]: E1014 13:14:43.919915 4725 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.920993 4725 cpu_manager.go:225] "Starting CPU manager" policy="none" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.921015 4725 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.921066 4725 state_mem.go:36] "Initialized new in-memory state store" Oct 14 13:14:43 crc kubenswrapper[4725]: W1014 13:14:43.921648 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused Oct 14 13:14:43 crc kubenswrapper[4725]: E1014 13:14:43.921705 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.219:6443: connect: connection refused" logger="UnhandledError" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.933885 4725 policy_none.go:49] "None policy: Start" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.935085 4725 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.935112 4725 state_mem.go:35] "Initializing new in-memory state store" Oct 14 13:14:43 crc kubenswrapper[4725]: E1014 13:14:43.967212 4725 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.985953 4725 manager.go:334] "Starting Device Plugin manager" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.986001 4725 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 14 13:14:43 
crc kubenswrapper[4725]: I1014 13:14:43.986017 4725 server.go:79] "Starting device plugin registration server" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.986587 4725 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.986614 4725 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.986884 4725 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.986971 4725 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Oct 14 13:14:43 crc kubenswrapper[4725]: I1014 13:14:43.986985 4725 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 14 13:14:43 crc kubenswrapper[4725]: E1014 13:14:43.996733 4725 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.021038 4725 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.021131 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.022200 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.022235 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.022244 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.022420 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.023179 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.023201 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.023210 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.023895 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.023912 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.023986 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.024021 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.024060 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.024743 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.024779 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.024788 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.024850 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.024880 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.024895 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.024907 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.025084 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.025157 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.026216 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.026237 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.026247 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.026390 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.026409 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.026418 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.026593 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.026611 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.026619 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.026707 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 13:14:44 crc 
kubenswrapper[4725]: I1014 13:14:44.027076 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.027101 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.027757 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.027775 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.027782 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.027936 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.027954 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.027961 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.028118 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.028138 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.029176 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.029240 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.029253 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:14:44 crc kubenswrapper[4725]: E1014 13:14:44.074094 4725 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" interval="400ms" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.087100 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.088340 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.088372 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.088385 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.088409 4725 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 14 13:14:44 crc kubenswrapper[4725]: E1014 13:14:44.088868 4725 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.219:6443: connect: connection refused" node="crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.091075 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.091124 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.091156 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.091179 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.091242 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.091290 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.091317 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.091341 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.091363 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.091418 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.091466 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.091512 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.091545 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.091597 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.091645 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.192660 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.192746 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.192782 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.192823 4725 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.192854 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.192884 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.192963 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.192999 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.193030 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.193061 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.193104 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.193160 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.193204 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.193245 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.193313 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.193683 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.193732 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.193798 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.193809 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.193744 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.193799 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.193843 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.193857 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: 
\"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.193893 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.193905 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.193901 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.193949 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.193938 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.193869 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.194029 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.289544 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.291225 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.291334 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.291363 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.291397 4725 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 14 13:14:44 crc kubenswrapper[4725]: E1014 13:14:44.292023 4725 kubelet_node_status.go:99] "Unable to register node with API server" 
err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.219:6443: connect: connection refused" node="crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.357014 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.366282 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.380918 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.402081 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.408094 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Oct 14 13:14:44 crc kubenswrapper[4725]: W1014 13:14:44.471297 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-8c65343379d353ebde9e9b58782a758a9af69b83a32ec3f99b4808d2953680d6 WatchSource:0}: Error finding container 8c65343379d353ebde9e9b58782a758a9af69b83a32ec3f99b4808d2953680d6: Status 404 returned error can't find the container with id 8c65343379d353ebde9e9b58782a758a9af69b83a32ec3f99b4808d2953680d6 Oct 14 13:14:44 crc kubenswrapper[4725]: W1014 13:14:44.475550 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-45a0398e081f5481ce65cb89da54c6f0b78eae818f89a1eef2d4ef16442414dd WatchSource:0}: Error finding container 45a0398e081f5481ce65cb89da54c6f0b78eae818f89a1eef2d4ef16442414dd: Status 404 returned error can't find the container with id 45a0398e081f5481ce65cb89da54c6f0b78eae818f89a1eef2d4ef16442414dd Oct 14 13:14:44 crc kubenswrapper[4725]: E1014 13:14:44.475792 4725 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" interval="800ms" Oct 14 13:14:44 crc kubenswrapper[4725]: W1014 13:14:44.476971 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-c9f104fe40acd677ad76496340351e1c6f82183cb683a348120dd69a987a364a WatchSource:0}: Error finding container c9f104fe40acd677ad76496340351e1c6f82183cb683a348120dd69a987a364a: Status 404 returned error can't find the container with id c9f104fe40acd677ad76496340351e1c6f82183cb683a348120dd69a987a364a Oct 14 13:14:44 crc kubenswrapper[4725]: W1014 13:14:44.480916 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-c34bae4ed64809a97d480d33906dc1e895a6ca42491e3133fe54ecf3dc3fd1f7 WatchSource:0}: Error finding container c34bae4ed64809a97d480d33906dc1e895a6ca42491e3133fe54ecf3dc3fd1f7: Status 404 returned error can't find the container with id 
c34bae4ed64809a97d480d33906dc1e895a6ca42491e3133fe54ecf3dc3fd1f7 Oct 14 13:14:44 crc kubenswrapper[4725]: W1014 13:14:44.482602 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-9f2ad6d74cbd20cd5d9c223b64144a6d36d335ce08c0f293d8373f8d4f170c7a WatchSource:0}: Error finding container 9f2ad6d74cbd20cd5d9c223b64144a6d36d335ce08c0f293d8373f8d4f170c7a: Status 404 returned error can't find the container with id 9f2ad6d74cbd20cd5d9c223b64144a6d36d335ce08c0f293d8373f8d4f170c7a Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.692554 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.694306 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.694361 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.694378 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.694414 4725 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 14 13:14:44 crc kubenswrapper[4725]: E1014 13:14:44.695140 4725 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.219:6443: connect: connection refused" node="crc" Oct 14 13:14:44 crc kubenswrapper[4725]: W1014 13:14:44.784691 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused Oct 14 13:14:44 crc kubenswrapper[4725]: E1014 13:14:44.784861 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.219:6443: connect: connection refused" logger="UnhandledError" Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.864747 4725 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.924205 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c9f104fe40acd677ad76496340351e1c6f82183cb683a348120dd69a987a364a"} Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.926050 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8c65343379d353ebde9e9b58782a758a9af69b83a32ec3f99b4808d2953680d6"} Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.927518 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
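The reflector failures above are the kubelet's client-go informers trying to prime their caches while the API server at api-int.crc.testing:6443 is still down. A minimal standalone sketch of the same LIST call, assuming client-go is vendored and using a hypothetical kubeconfig path (the kubelet authenticates with its own credentials, not a kubeconfig like this):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same shape as the reflector's initial LIST in the log:
	// /api/v1/nodes?fieldSelector=metadata.name=crc&limit=500
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{
		FieldSelector: "metadata.name=crc",
		Limit:         500,
	})
	if err != nil {
		// While the API server is down this fails just like the entries above:
		// dial tcp 38.102.83.219:6443: connect: connection refused
		fmt.Println("list failed:", err)
		return
	}
	fmt.Println("nodes:", len(nodes.Items))
}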
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"45a0398e081f5481ce65cb89da54c6f0b78eae818f89a1eef2d4ef16442414dd"} Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.928799 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9f2ad6d74cbd20cd5d9c223b64144a6d36d335ce08c0f293d8373f8d4f170c7a"} Oct 14 13:14:44 crc kubenswrapper[4725]: I1014 13:14:44.930156 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"c34bae4ed64809a97d480d33906dc1e895a6ca42491e3133fe54ecf3dc3fd1f7"} Oct 14 13:14:44 crc kubenswrapper[4725]: W1014 13:14:44.948157 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused Oct 14 13:14:44 crc kubenswrapper[4725]: E1014 13:14:44.948286 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.219:6443: connect: connection refused" logger="UnhandledError" Oct 14 13:14:45 crc kubenswrapper[4725]: W1014 13:14:45.246531 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused Oct 14 13:14:45 crc kubenswrapper[4725]: E1014 13:14:45.246653 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.219:6443: connect: connection refused" logger="UnhandledError" Oct 14 13:14:45 crc kubenswrapper[4725]: W1014 13:14:45.251083 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused Oct 14 13:14:45 crc kubenswrapper[4725]: E1014 13:14:45.251178 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.219:6443: connect: connection refused" logger="UnhandledError" Oct 14 13:14:45 crc kubenswrapper[4725]: E1014 13:14:45.277593 4725 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" interval="1.6s" Oct 14 13:14:45 crc kubenswrapper[4725]: I1014 13:14:45.495601 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 13:14:45 crc kubenswrapper[4725]: I1014 13:14:45.497586 4725 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:14:45 crc kubenswrapper[4725]: I1014 13:14:45.497641 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:14:45 crc kubenswrapper[4725]: I1014 13:14:45.497658 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:14:45 crc kubenswrapper[4725]: I1014 13:14:45.497694 4725 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 14 13:14:45 crc kubenswrapper[4725]: E1014 13:14:45.498310 4725 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.219:6443: connect: connection refused" node="crc" Oct 14 13:14:45 crc kubenswrapper[4725]: I1014 13:14:45.865135 4725 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused Oct 14 13:14:46 crc kubenswrapper[4725]: W1014 13:14:46.523383 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused Oct 14 13:14:46 crc kubenswrapper[4725]: E1014 13:14:46.523534 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.219:6443: connect: connection refused" logger="UnhandledError" Oct 14 13:14:46 crc kubenswrapper[4725]: I1014 13:14:46.865249 4725 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused Oct 14 13:14:46 crc kubenswrapper[4725]: E1014 13:14:46.878651 4725 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" interval="3.2s" Oct 14 13:14:47 crc kubenswrapper[4725]: I1014 13:14:47.098707 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 13:14:47 crc kubenswrapper[4725]: I1014 13:14:47.099872 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:14:47 crc kubenswrapper[4725]: I1014 13:14:47.099918 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:14:47 crc kubenswrapper[4725]: I1014 13:14:47.099932 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:14:47 crc kubenswrapper[4725]: I1014 13:14:47.099959 4725 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 14 13:14:47 crc kubenswrapper[4725]: E1014 13:14:47.100414 4725 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 
Oct 14 13:14:47 crc kubenswrapper[4725]: W1014 13:14:47.686149 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused
Oct 14 13:14:47 crc kubenswrapper[4725]: E1014 13:14:47.686588 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.219:6443: connect: connection refused" logger="UnhandledError"
Oct 14 13:14:47 crc kubenswrapper[4725]: W1014 13:14:47.815806 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused
Oct 14 13:14:47 crc kubenswrapper[4725]: E1014 13:14:47.815956 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.219:6443: connect: connection refused" logger="UnhandledError"
Oct 14 13:14:47 crc kubenswrapper[4725]: I1014 13:14:47.865278 4725 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused
Oct 14 13:14:47 crc kubenswrapper[4725]: I1014 13:14:47.939399 4725 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f6837a60d32fc503ebad4b95d805c53789ec52328ec6c95b9ee89b8995e957f5" exitCode=0
Oct 14 13:14:47 crc kubenswrapper[4725]: I1014 13:14:47.939498 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f6837a60d32fc503ebad4b95d805c53789ec52328ec6c95b9ee89b8995e957f5"}
Oct 14 13:14:47 crc kubenswrapper[4725]: I1014 13:14:47.939531 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 13:14:47 crc kubenswrapper[4725]: I1014 13:14:47.940864 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:14:47 crc kubenswrapper[4725]: I1014 13:14:47.940904 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:14:47 crc kubenswrapper[4725]: I1014 13:14:47.940918 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:14:47 crc kubenswrapper[4725]: I1014 13:14:47.941855 4725 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="43256276b9c25611f2e8388c421780adc2e4bc57cc94b55622ad09a4d4742f08" exitCode=0
Oct 14 13:14:47 crc kubenswrapper[4725]: I1014 13:14:47.941904 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"43256276b9c25611f2e8388c421780adc2e4bc57cc94b55622ad09a4d4742f08"}
Oct 14 13:14:47 crc kubenswrapper[4725]: I1014 13:14:47.941959 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 13:14:47 crc kubenswrapper[4725]: I1014 13:14:47.942771 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:14:47 crc kubenswrapper[4725]: I1014 13:14:47.942802 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:14:47 crc kubenswrapper[4725]: I1014 13:14:47.942814 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:14:47 crc kubenswrapper[4725]: I1014 13:14:47.944263 4725 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68" exitCode=0
Oct 14 13:14:47 crc kubenswrapper[4725]: I1014 13:14:47.944336 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68"}
Oct 14 13:14:47 crc kubenswrapper[4725]: I1014 13:14:47.944367 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 13:14:47 crc kubenswrapper[4725]: I1014 13:14:47.945274 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:14:47 crc kubenswrapper[4725]: I1014 13:14:47.945301 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:14:47 crc kubenswrapper[4725]: I1014 13:14:47.945312 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:14:47 crc kubenswrapper[4725]: I1014 13:14:47.947025 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 13:14:47 crc kubenswrapper[4725]: I1014 13:14:47.947216 4725 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="24408f31c736057fab41e161d394fe99f6d932020a796afc376013e2164aa209" exitCode=0
Oct 14 13:14:47 crc kubenswrapper[4725]: I1014 13:14:47.947291 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 13:14:47 crc kubenswrapper[4725]: I1014 13:14:47.947292 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"24408f31c736057fab41e161d394fe99f6d932020a796afc376013e2164aa209"}
Oct 14 13:14:47 crc kubenswrapper[4725]: I1014 13:14:47.948222 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:14:47 crc kubenswrapper[4725]: I1014 13:14:47.948261 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:14:47 crc kubenswrapper[4725]: I1014 13:14:47.948275 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:14:47 crc kubenswrapper[4725]: I1014 13:14:47.949064 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:14:47 crc kubenswrapper[4725]: I1014 13:14:47.949095 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:14:47 crc kubenswrapper[4725]: I1014 13:14:47.949106 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:14:47 crc kubenswrapper[4725]: I1014 13:14:47.949544 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"92d8ba988206535f4ad53245b92484c9cfaf86d2a768d171b6a56a3d71ce2855"}
Oct 14 13:14:47 crc kubenswrapper[4725]: I1014 13:14:47.949591 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c1c3fa61126ab1de3e82ff31cd2d54b72d6a4355db0dbe16a1300f5349ec032a"}
Oct 14 13:14:48 crc kubenswrapper[4725]: W1014 13:14:48.367965 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused
Oct 14 13:14:48 crc kubenswrapper[4725]: E1014 13:14:48.368049 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.219:6443: connect: connection refused" logger="UnhandledError"
Oct 14 13:14:48 crc kubenswrapper[4725]: I1014 13:14:48.864867 4725 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused
Oct 14 13:14:48 crc kubenswrapper[4725]: I1014 13:14:48.957148 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"360cb71d40b47c350378789affa9d40d1097c487f19759ceac7dd653211dc523"}
Oct 14 13:14:48 crc kubenswrapper[4725]: I1014 13:14:48.957238 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"94ddcef18a4ca3491877f3ed3e14e43e7a478afef84fe6d9614c5dfd52c25864"}
Oct 14 13:14:48 crc kubenswrapper[4725]: I1014 13:14:48.957253 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cb26ea922d538bc6c5cff6f8d0474cefe723ab38afb6a734a1f412d7591355ac"}
Oct 14 13:14:48 crc kubenswrapper[4725]: I1014 13:14:48.957270 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a369d927b570ab87076cac9e54716473a0856899d0f4697b4b30d07a5181c4ad"}
Oct 14 13:14:48 crc kubenswrapper[4725]: I1014 13:14:48.960625 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c401885a70b38cf87c5573364a8d07a79bd451eca6a6263abb37958614e9cf8d"}
Oct 14 13:14:48 crc kubenswrapper[4725]: I1014 13:14:48.960664 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"03a26fe24f459eeb79dbe24b7b637389b9ba47f6880369e92a4c823d216d17b3"}
Oct 14 13:14:48 crc kubenswrapper[4725]: I1014 13:14:48.960679 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"de7db8a1777567b9c9c186da537b33dd2a68202d45022e33ec691385e647ef5d"}
Oct 14 13:14:48 crc kubenswrapper[4725]: I1014 13:14:48.960799 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 13:14:48 crc kubenswrapper[4725]: I1014 13:14:48.961812 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:14:48 crc kubenswrapper[4725]: I1014 13:14:48.961852 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:14:48 crc kubenswrapper[4725]: I1014 13:14:48.961865 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:14:48 crc kubenswrapper[4725]: I1014 13:14:48.964570 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"968caf419d5ddbb9213c7325a25516bdec5108f010a0e3457faa716b6f6f8955"}
Oct 14 13:14:48 crc kubenswrapper[4725]: I1014 13:14:48.964617 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d76dc71de9a8769dc0878cc796ad53a4355d128d9d3011601d1e10127a390b87"}
Oct 14 13:14:48 crc kubenswrapper[4725]: I1014 13:14:48.964719 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 13:14:48 crc kubenswrapper[4725]: I1014 13:14:48.965407 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:14:48 crc kubenswrapper[4725]: I1014 13:14:48.965438 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:14:48 crc kubenswrapper[4725]: I1014 13:14:48.965470 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:14:48 crc kubenswrapper[4725]: I1014 13:14:48.967368 4725 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="7f40500763ccb1af96b766f07ee889156444a687c76ef2db4e3bcb7fb267f01f" exitCode=0
Oct 14 13:14:48 crc kubenswrapper[4725]: I1014 13:14:48.967431 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"7f40500763ccb1af96b766f07ee889156444a687c76ef2db4e3bcb7fb267f01f"}
Oct 14 13:14:48 crc kubenswrapper[4725]: I1014 13:14:48.967577 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 13:14:48 crc kubenswrapper[4725]: I1014 13:14:48.968344 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:14:48 crc kubenswrapper[4725]: I1014 13:14:48.968380 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:14:48 crc kubenswrapper[4725]: I1014 13:14:48.968395 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:14:48 crc kubenswrapper[4725]: I1014 13:14:48.972021 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"98123fb205ff3d4c97851b4bb515095342e86389ca41803f84a5a577388fa6cc"}
Oct 14 13:14:48 crc kubenswrapper[4725]: I1014 13:14:48.972082 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 13:14:48 crc kubenswrapper[4725]: I1014 13:14:48.973034 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:14:48 crc kubenswrapper[4725]: I1014 13:14:48.973078 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:14:48 crc kubenswrapper[4725]: I1014 13:14:48.973092 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:14:49 crc kubenswrapper[4725]: I1014 13:14:49.978909 4725 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="e661cefd3630af73c1e109b4da2ea190104b72ec74d3cb8de3e3088938e3c2d3" exitCode=0
Oct 14 13:14:49 crc kubenswrapper[4725]: I1014 13:14:49.978989 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"e661cefd3630af73c1e109b4da2ea190104b72ec74d3cb8de3e3088938e3c2d3"}
Oct 14 13:14:49 crc kubenswrapper[4725]: I1014 13:14:49.979044 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 13:14:49 crc kubenswrapper[4725]: I1014 13:14:49.980670 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:14:49 crc kubenswrapper[4725]: I1014 13:14:49.980736 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:14:49 crc kubenswrapper[4725]: I1014 13:14:49.980761 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:14:49 crc kubenswrapper[4725]: I1014 13:14:49.983865 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"54ba77f6cadef4a84c4cc13ac9e1a86d31f32655aa5ec68b1b948fed3cadf3e2"}
Oct 14 13:14:49 crc kubenswrapper[4725]: I1014 13:14:49.983898 4725 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 14 13:14:49 crc kubenswrapper[4725]: I1014 13:14:49.983941 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 13:14:49 crc kubenswrapper[4725]: I1014 13:14:49.983954 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 13:14:49 crc kubenswrapper[4725]: I1014 13:14:49.984042 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 13:14:49 crc kubenswrapper[4725]: I1014 13:14:49.984109 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 13:14:49 crc kubenswrapper[4725]: I1014 13:14:49.985715 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:14:49 crc kubenswrapper[4725]: I1014 13:14:49.985762 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:14:49 crc kubenswrapper[4725]: I1014 13:14:49.985792 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:14:49 crc kubenswrapper[4725]: I1014 13:14:49.988963 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:14:49 crc kubenswrapper[4725]: I1014 13:14:49.989008 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:14:49 crc kubenswrapper[4725]: I1014 13:14:49.989019 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:14:49 crc kubenswrapper[4725]: I1014 13:14:49.990003 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:14:49 crc kubenswrapper[4725]: I1014 13:14:49.990030 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:14:49 crc kubenswrapper[4725]: I1014 13:14:49.990040 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:14:49 crc kubenswrapper[4725]: I1014 13:14:49.990130 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:14:49 crc kubenswrapper[4725]: I1014 13:14:49.990193 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:14:49 crc kubenswrapper[4725]: I1014 13:14:49.990212 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:14:50 crc kubenswrapper[4725]: I1014 13:14:50.047883 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 14 13:14:50 crc kubenswrapper[4725]: I1014 13:14:50.300624 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 13:14:50 crc kubenswrapper[4725]: I1014 13:14:50.302177 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:14:50 crc kubenswrapper[4725]: I1014 13:14:50.302260 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:14:50 crc kubenswrapper[4725]: I1014 13:14:50.302295 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:14:50 crc kubenswrapper[4725]: I1014 13:14:50.302345 4725 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Oct 14 13:14:50 crc kubenswrapper[4725]: I1014 13:14:50.991712 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9ca4c64c5fb322150d6e25d82e95f3d46ca9f2ed3d60f19727a7305c16275e12"}
Oct 14 13:14:50 crc kubenswrapper[4725]: I1014 13:14:50.991809 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"334bf5067c2c3fe24cfc6f9bf08f93e0b1092d1cc999efcf2a90231f60a27b11"}
Oct 14 13:14:50 crc kubenswrapper[4725]: I1014 13:14:50.991826 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"37a7f5344ebb6a787b7debb400b42aa13621fcd92b662aef3d29dbdfe7058f15"}
Oct 14 13:14:50 crc kubenswrapper[4725]: I1014 13:14:50.991840 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ffd6d6e90161238a2071787c5c26756b4b07d83171fb3bacbd005be1e2d85d59"}
Oct 14 13:14:50 crc kubenswrapper[4725]: I1014 13:14:50.991849 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 13:14:50 crc kubenswrapper[4725]: I1014 13:14:50.991889 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 13:14:50 crc kubenswrapper[4725]: I1014 13:14:50.992003 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 14 13:14:50 crc kubenswrapper[4725]: I1014 13:14:50.993098 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:14:50 crc kubenswrapper[4725]: I1014 13:14:50.993132 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:14:50 crc kubenswrapper[4725]: I1014 13:14:50.993142 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:14:50 crc kubenswrapper[4725]: I1014 13:14:50.993225 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:14:50 crc kubenswrapper[4725]: I1014 13:14:50.993258 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:14:50 crc kubenswrapper[4725]: I1014 13:14:50.993278 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:14:51 crc kubenswrapper[4725]: I1014 13:14:51.299572 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 14 13:14:51 crc kubenswrapper[4725]: I1014 13:14:51.299811 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 13:14:51 crc kubenswrapper[4725]: I1014 13:14:51.301350 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:14:51 crc kubenswrapper[4725]: I1014 13:14:51.301404 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:14:51 crc kubenswrapper[4725]: I1014 13:14:51.301417 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:14:52 crc kubenswrapper[4725]: I1014 13:14:52.002562 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 13:14:52 crc kubenswrapper[4725]: I1014 13:14:52.002534 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a139ff62e69e393f383c14174f219a7db0959ac559e699d4a9415eb1cc3a8c68"}
Oct 14 13:14:52 crc kubenswrapper[4725]: I1014 13:14:52.002659 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 13:14:52 crc kubenswrapper[4725]: I1014 13:14:52.004271 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:14:52 crc kubenswrapper[4725]: I1014 13:14:52.004366 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:14:52 crc kubenswrapper[4725]: I1014 13:14:52.004396 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:14:52 crc kubenswrapper[4725]: I1014 13:14:52.004824 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:14:52 crc kubenswrapper[4725]: I1014 13:14:52.004886 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:14:52 crc kubenswrapper[4725]: I1014 13:14:52.004914 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:14:52 crc kubenswrapper[4725]: I1014 13:14:52.429522 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 14 13:14:53 crc kubenswrapper[4725]: I1014 13:14:53.005980 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 13:14:53 crc kubenswrapper[4725]: I1014 13:14:53.006101 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 13:14:53 crc kubenswrapper[4725]: I1014 13:14:53.010795 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:14:53 crc kubenswrapper[4725]: I1014 13:14:53.010843 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:14:53 crc kubenswrapper[4725]: I1014 13:14:53.010858 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:14:53 crc kubenswrapper[4725]: I1014 13:14:53.010802 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:14:53 crc kubenswrapper[4725]: I1014 13:14:53.010916 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:14:53 crc kubenswrapper[4725]: I1014 13:14:53.010958 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:14:53 crc kubenswrapper[4725]: E1014 13:14:53.996921 4725 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Oct 14 13:14:54 crc kubenswrapper[4725]: I1014 13:14:54.176383 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 14 13:14:54 crc kubenswrapper[4725]: I1014 13:14:54.176723 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 13:14:54 crc kubenswrapper[4725]: I1014 13:14:54.178345 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:14:54 crc kubenswrapper[4725]: I1014 13:14:54.178411 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:14:54 crc kubenswrapper[4725]: I1014 13:14:54.178428 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:14:54 crc kubenswrapper[4725]: I1014 13:14:54.299754 4725 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Oct 14 13:14:54 crc kubenswrapper[4725]: I1014 13:14:54.299880 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Oct 14 13:14:54 crc kubenswrapper[4725]: I1014 13:14:54.568411 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 14 13:14:54 crc kubenswrapper[4725]: I1014 13:14:54.568715 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 13:14:54 crc kubenswrapper[4725]: I1014 13:14:54.570542 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:14:54 crc kubenswrapper[4725]: I1014 13:14:54.570588 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:14:54 crc kubenswrapper[4725]: I1014 13:14:54.570603 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:14:54 crc kubenswrapper[4725]: I1014 13:14:54.575273 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 14 13:14:55 crc kubenswrapper[4725]: I1014 13:14:55.010524 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 13:14:55 crc kubenswrapper[4725]: I1014 13:14:55.010652 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 14 13:14:55 crc kubenswrapper[4725]: I1014 13:14:55.011730 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:14:55 crc kubenswrapper[4725]: I1014 13:14:55.011801 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:14:55 crc kubenswrapper[4725]: I1014 13:14:55.011815 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:14:55 crc kubenswrapper[4725]: I1014 13:14:55.184714 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Oct 14 13:14:55 crc kubenswrapper[4725]: I1014 13:14:55.184985 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 13:14:55 crc kubenswrapper[4725]: I1014 13:14:55.186917 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:14:55 crc kubenswrapper[4725]: I1014 13:14:55.186977 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:14:55 crc kubenswrapper[4725]: I1014 13:14:55.186989 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:14:56 crc kubenswrapper[4725]: I1014 13:14:56.012163 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 13:14:56 crc kubenswrapper[4725]: I1014 13:14:56.013392 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:14:56 crc kubenswrapper[4725]: I1014 13:14:56.013428 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:14:56 crc kubenswrapper[4725]: I1014 13:14:56.013439 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:14:58 crc kubenswrapper[4725]: I1014 13:14:58.630570 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 14 13:14:58 crc kubenswrapper[4725]: I1014 13:14:58.630803 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 13:14:58 crc kubenswrapper[4725]: I1014 13:14:58.632403 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:14:58 crc kubenswrapper[4725]: I1014 13:14:58.632491 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:14:58 crc kubenswrapper[4725]: I1014 13:14:58.632502 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:14:58 crc kubenswrapper[4725]: I1014 13:14:58.635550 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 14 13:14:59 crc kubenswrapper[4725]: I1014 13:14:59.019147 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 14 13:14:59 crc kubenswrapper[4725]: I1014 13:14:59.020089 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:14:59 crc kubenswrapper[4725]: I1014 13:14:59.020165 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:14:59 crc kubenswrapper[4725]: I1014 13:14:59.020189 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:14:59 crc kubenswrapper[4725]: I1014 13:14:59.795025 4725 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:55272->192.168.126.11:17697: read: connection reset by peer" start-of-body=
peer" start-of-body= Oct 14 13:14:59 crc kubenswrapper[4725]: I1014 13:14:59.795117 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:55272->192.168.126.11:17697: read: connection reset by peer" Oct 14 13:14:59 crc kubenswrapper[4725]: I1014 13:14:59.865854 4725 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Oct 14 13:14:59 crc kubenswrapper[4725]: I1014 13:14:59.905731 4725 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Oct 14 13:14:59 crc kubenswrapper[4725]: I1014 13:14:59.906034 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Oct 14 13:15:00 crc kubenswrapper[4725]: I1014 13:15:00.024325 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Oct 14 13:15:00 crc kubenswrapper[4725]: I1014 13:15:00.026552 4725 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="54ba77f6cadef4a84c4cc13ac9e1a86d31f32655aa5ec68b1b948fed3cadf3e2" exitCode=255 Oct 14 13:15:00 crc kubenswrapper[4725]: I1014 13:15:00.026637 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"54ba77f6cadef4a84c4cc13ac9e1a86d31f32655aa5ec68b1b948fed3cadf3e2"} Oct 14 13:15:00 crc kubenswrapper[4725]: I1014 13:15:00.026956 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 13:15:00 crc kubenswrapper[4725]: I1014 13:15:00.027900 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:00 crc kubenswrapper[4725]: I1014 13:15:00.027942 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:00 crc kubenswrapper[4725]: I1014 13:15:00.027954 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:00 crc kubenswrapper[4725]: I1014 13:15:00.028504 4725 scope.go:117] "RemoveContainer" containerID="54ba77f6cadef4a84c4cc13ac9e1a86d31f32655aa5ec68b1b948fed3cadf3e2" Oct 14 13:15:00 crc kubenswrapper[4725]: E1014 13:15:00.080277 4725 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Oct 14 13:15:00 crc 
kubenswrapper[4725]: I1014 13:15:00.259094 4725 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Oct 14 13:15:00 crc kubenswrapper[4725]: I1014 13:15:00.259174 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Oct 14 13:15:00 crc kubenswrapper[4725]: I1014 13:15:00.263092 4725 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Oct 14 13:15:00 crc kubenswrapper[4725]: I1014 13:15:00.263174 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Oct 14 13:15:00 crc kubenswrapper[4725]: I1014 13:15:00.268311 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Oct 14 13:15:00 crc kubenswrapper[4725]: I1014 13:15:00.268554 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 13:15:00 crc kubenswrapper[4725]: I1014 13:15:00.269752 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:00 crc kubenswrapper[4725]: I1014 13:15:00.269796 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:00 crc kubenswrapper[4725]: I1014 13:15:00.269807 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:00 crc kubenswrapper[4725]: I1014 13:15:00.329607 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Oct 14 13:15:01 crc kubenswrapper[4725]: I1014 13:15:01.030224 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Oct 14 13:15:01 crc kubenswrapper[4725]: I1014 13:15:01.051877 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987"} Oct 14 13:15:01 crc kubenswrapper[4725]: I1014 13:15:01.051966 4725 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Oct 14 13:15:01 crc kubenswrapper[4725]: I1014 13:15:01.052214 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 13:15:01 crc kubenswrapper[4725]: I1014 13:15:01.053712 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:01 crc kubenswrapper[4725]: I1014 13:15:01.053746 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:01 crc kubenswrapper[4725]: I1014 13:15:01.053761 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:01 crc kubenswrapper[4725]: I1014 13:15:01.053712 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:01 crc kubenswrapper[4725]: I1014 13:15:01.053817 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:01 crc kubenswrapper[4725]: I1014 13:15:01.053837 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:01 crc kubenswrapper[4725]: I1014 13:15:01.067830 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Oct 14 13:15:02 crc kubenswrapper[4725]: I1014 13:15:02.054746 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 13:15:02 crc kubenswrapper[4725]: I1014 13:15:02.056842 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:02 crc kubenswrapper[4725]: I1014 13:15:02.056879 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:02 crc kubenswrapper[4725]: I1014 13:15:02.056889 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:03 crc kubenswrapper[4725]: E1014 13:15:03.997089 4725 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Oct 14 13:15:04 crc kubenswrapper[4725]: I1014 13:15:04.181264 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 13:15:04 crc kubenswrapper[4725]: I1014 13:15:04.181478 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 13:15:04 crc kubenswrapper[4725]: I1014 13:15:04.181482 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 13:15:04 crc kubenswrapper[4725]: I1014 13:15:04.182743 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:04 crc kubenswrapper[4725]: I1014 13:15:04.182792 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:04 crc kubenswrapper[4725]: I1014 13:15:04.182802 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:04 crc kubenswrapper[4725]: I1014 13:15:04.187487 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 
13:15:04 crc kubenswrapper[4725]: I1014 13:15:04.301104 4725 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 14 13:15:04 crc kubenswrapper[4725]: I1014 13:15:04.301176 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Oct 14 13:15:05 crc kubenswrapper[4725]: I1014 13:15:05.062813 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 13:15:05 crc kubenswrapper[4725]: I1014 13:15:05.063847 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:05 crc kubenswrapper[4725]: I1014 13:15:05.063897 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:05 crc kubenswrapper[4725]: I1014 13:15:05.063910 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:05 crc kubenswrapper[4725]: E1014 13:15:05.237202 4725 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Oct 14 13:15:05 crc kubenswrapper[4725]: I1014 13:15:05.239529 4725 trace.go:236] Trace[1656413040]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (14-Oct-2025 13:14:53.644) (total time: 11594ms): Oct 14 13:15:05 crc kubenswrapper[4725]: Trace[1656413040]: ---"Objects listed" error: 11594ms (13:15:05.239) Oct 14 13:15:05 crc kubenswrapper[4725]: Trace[1656413040]: [11.59477939s] [11.59477939s] END Oct 14 13:15:05 crc kubenswrapper[4725]: I1014 13:15:05.239556 4725 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Oct 14 13:15:05 crc kubenswrapper[4725]: I1014 13:15:05.241971 4725 trace.go:236] Trace[398076273]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (14-Oct-2025 13:14:51.914) (total time: 13326ms): Oct 14 13:15:05 crc kubenswrapper[4725]: Trace[398076273]: ---"Objects listed" error: 13326ms (13:15:05.241) Oct 14 13:15:05 crc kubenswrapper[4725]: Trace[398076273]: [13.326899613s] [13.326899613s] END Oct 14 13:15:05 crc kubenswrapper[4725]: I1014 13:15:05.242039 4725 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Oct 14 13:15:05 crc kubenswrapper[4725]: I1014 13:15:05.242348 4725 trace.go:236] Trace[782968632]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (14-Oct-2025 13:14:52.505) (total time: 12736ms): Oct 14 13:15:05 crc kubenswrapper[4725]: Trace[782968632]: ---"Objects listed" error: 12736ms (13:15:05.242) Oct 14 13:15:05 crc kubenswrapper[4725]: Trace[782968632]: [12.736666642s] [12.736666642s] END Oct 14 13:15:05 crc kubenswrapper[4725]: I1014 13:15:05.242488 4725 reflector.go:368] Caches populated for *v1.Node from 
k8s.io/client-go/informers/factory.go:160 Oct 14 13:15:05 crc kubenswrapper[4725]: I1014 13:15:05.242688 4725 trace.go:236] Trace[3776460]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (14-Oct-2025 13:14:52.533) (total time: 12708ms): Oct 14 13:15:05 crc kubenswrapper[4725]: Trace[3776460]: ---"Objects listed" error: 12708ms (13:15:05.242) Oct 14 13:15:05 crc kubenswrapper[4725]: Trace[3776460]: [12.708883036s] [12.708883036s] END Oct 14 13:15:05 crc kubenswrapper[4725]: I1014 13:15:05.242718 4725 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Oct 14 13:15:05 crc kubenswrapper[4725]: I1014 13:15:05.244713 4725 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Oct 14 13:15:05 crc kubenswrapper[4725]: I1014 13:15:05.851801 4725 apiserver.go:52] "Watching apiserver" Oct 14 13:15:05 crc kubenswrapper[4725]: I1014 13:15:05.921220 4725 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Oct 14 13:15:05 crc kubenswrapper[4725]: I1014 13:15:05.921527 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-n9mfx","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"] Oct 14 13:15:05 crc kubenswrapper[4725]: I1014 13:15:05.921937 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 14 13:15:05 crc kubenswrapper[4725]: I1014 13:15:05.922045 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:15:05 crc kubenswrapper[4725]: I1014 13:15:05.922094 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:15:05 crc kubenswrapper[4725]: E1014 13:15:05.922168 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 13:15:05 crc kubenswrapper[4725]: E1014 13:15:05.922336 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 13:15:05 crc kubenswrapper[4725]: I1014 13:15:05.922351 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 14 13:15:05 crc kubenswrapper[4725]: I1014 13:15:05.922624 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:15:05 crc kubenswrapper[4725]: E1014 13:15:05.922923 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 13:15:05 crc kubenswrapper[4725]: I1014 13:15:05.922370 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 14 13:15:05 crc kubenswrapper[4725]: I1014 13:15:05.922676 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-n9mfx" Oct 14 13:15:05 crc kubenswrapper[4725]: I1014 13:15:05.923969 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Oct 14 13:15:05 crc kubenswrapper[4725]: I1014 13:15:05.924074 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Oct 14 13:15:05 crc kubenswrapper[4725]: I1014 13:15:05.924958 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Oct 14 13:15:05 crc kubenswrapper[4725]: I1014 13:15:05.925275 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Oct 14 13:15:05 crc kubenswrapper[4725]: I1014 13:15:05.925837 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Oct 14 13:15:05 crc kubenswrapper[4725]: I1014 13:15:05.926003 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Oct 14 13:15:05 crc kubenswrapper[4725]: I1014 13:15:05.926414 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Oct 14 13:15:05 crc kubenswrapper[4725]: I1014 13:15:05.927685 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Oct 14 13:15:05 crc kubenswrapper[4725]: I1014 13:15:05.927924 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Oct 14 13:15:05 crc kubenswrapper[4725]: I1014 13:15:05.927978 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Oct 14 13:15:05 crc kubenswrapper[4725]: I1014 13:15:05.928151 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Oct 14 13:15:05 crc kubenswrapper[4725]: I1014 13:15:05.928150 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Oct 14 13:15:05 crc kubenswrapper[4725]: I1014 13:15:05.950798 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:05 crc kubenswrapper[4725]: I1014 13:15:05.971032 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:05 crc kubenswrapper[4725]: I1014 13:15:05.972382 4725 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 14 13:15:05 crc kubenswrapper[4725]: I1014 13:15:05.986280 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:05 crc kubenswrapper[4725]: I1014 13:15:05.994664 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.003620 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.013506 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.022306 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.030967 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.041782 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9mfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"768af815-9351-4b60-a9de-9f188049acd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8zz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9mfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused"
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.048329 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.048371 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.048389 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.048407 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.048422 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.048437 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.048466 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.048481 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.048495 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.048510 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.048527 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.048542 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.048563 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.048578 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.048593 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.048607 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.048623 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.048641 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.048656 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.048671 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.048688 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.048702 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.048716 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.048729 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.048743 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.048758 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.048772 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.048788 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.048806 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.048823 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.048838 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.048853 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.048882 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.048901 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.048926 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.048948 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.048969 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.048994 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049009 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049024 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049041 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049058 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049073 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049092 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049118 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049134 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049149 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049164 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049182 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049197 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049213 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049229 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049246 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049262 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049278 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049304 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049331 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049347 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049368 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049383 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049402 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049417 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049432 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049463 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049503 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049521 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049536 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049551 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049570 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049586 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049626 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049641 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049664 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049679 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049694 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049709 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049726 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049742 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049759 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049773 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049789 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049805 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049823 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049843 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049868 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049888 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049905 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049927 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049944 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\"
(UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049960 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049977 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.049993 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050009 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050025 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050039 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050056 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050071 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050096 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050113 4725 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050127 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050144 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050159 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050178 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050194 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050210 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050227 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050242 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050262 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050283 4725 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050304 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050323 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050343 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050365 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050385 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050405 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050420 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050439 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050473 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050492 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050512 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050528 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050545 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050563 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050580 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050601 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050623 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050644 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050666 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050686 4725 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050708 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050729 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050750 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050773 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050794 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050819 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050842 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050885 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050902 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050919 
4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050934 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050950 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050966 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050982 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.050998 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.051014 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.051033 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.051050 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.051066 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 14 13:15:06 crc kubenswrapper[4725]: 
I1014 13:15:06.051083 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.051102 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.051119 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.051135 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.051151 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.051170 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.051186 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.051202 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.051220 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.051236 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.051253 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.051270 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.051286 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.051302 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.051323 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.051340 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.051358 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.051381 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.051402 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.051424 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.051553 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.051579 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.051601 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.051623 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.051645 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.051669 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.051690 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.051713 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.051735 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.051755 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.051776 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.051796 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.051818 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.051838 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.051857 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.051878 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.051898 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.051920 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.051942 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.051966 4725 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.051990 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.052013 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.052033 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.052054 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.052079 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.052104 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.052129 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.052183 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.052218 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/768af815-9351-4b60-a9de-9f188049acd8-hosts-file\") pod \"node-resolver-n9mfx\" (UID: \"768af815-9351-4b60-a9de-9f188049acd8\") " 
pod="openshift-dns/node-resolver-n9mfx" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.052246 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.052271 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.052294 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.052317 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.052338 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn8zz\" (UniqueName: \"kubernetes.io/projected/768af815-9351-4b60-a9de-9f188049acd8-kube-api-access-hn8zz\") pod \"node-resolver-n9mfx\" (UID: \"768af815-9351-4b60-a9de-9f188049acd8\") " pod="openshift-dns/node-resolver-n9mfx" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.052362 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.052385 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.052411 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.052432 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.052469 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.052497 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.052516 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.052541 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.052557 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.053512 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.053522 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.053779 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.054012 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.054174 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.054245 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.054276 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.054379 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.054477 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.054528 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.054604 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.054666 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.054755 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.054872 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.055238 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.055269 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.055523 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.055585 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.055711 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.055733 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.055986 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.056071 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.056302 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.056320 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.056647 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.057253 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.057793 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.057829 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.058168 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.058361 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.058389 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). 
InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.058621 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.058634 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.058835 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.058850 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.058919 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.059079 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.059394 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.062061 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.062324 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.062566 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.062770 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.062972 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.063189 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.063648 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.064034 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.064264 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.065048 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.065786 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.065865 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.066277 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.066475 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.067040 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.067123 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). 
InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.067999 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.068054 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.068410 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.068643 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.069133 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.069207 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.069410 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.069261 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.069438 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.069490 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.070222 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.070272 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.070487 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.070837 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.070920 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.071108 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.071610 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.071937 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.072185 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.072213 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.072379 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.072598 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.072974 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.073006 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.073023 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.073358 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.073812 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.073830 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.074350 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.074562 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.074670 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.075078 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.075294 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.075789 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.076026 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.076436 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.076550 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: E1014 13:15:06.077937 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-14 13:15:06.577915196 +0000 UTC m=+23.426349995 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.078078 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: E1014 13:15:06.078580 4725 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 13:15:06 crc kubenswrapper[4725]: E1014 13:15:06.078650 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 13:15:06.578640555 +0000 UTC m=+23.427075354 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.078808 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.079718 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.079842 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 14 13:15:06 crc kubenswrapper[4725]: E1014 13:15:06.079890 4725 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 13:15:06 crc kubenswrapper[4725]: E1014 13:15:06.079918 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 13:15:06.579910219 +0000 UTC m=+23.428345028 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.082686 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.081478 4725 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.084592 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.084863 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.085660 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.085846 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.085865 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.086372 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.088517 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.088739 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.088897 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.089124 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.089222 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.089173 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.089381 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.089638 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.093488 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.093710 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: E1014 13:15:06.093892 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 13:15:06 crc kubenswrapper[4725]: E1014 13:15:06.093914 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 13:15:06 crc kubenswrapper[4725]: E1014 13:15:06.093926 4725 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 13:15:06 crc kubenswrapper[4725]: E1014 13:15:06.093990 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-14 13:15:06.593970552 +0000 UTC m=+23.442405361 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 13:15:06 crc kubenswrapper[4725]: E1014 13:15:06.094386 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 13:15:06 crc kubenswrapper[4725]: E1014 13:15:06.094406 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 13:15:06 crc kubenswrapper[4725]: E1014 13:15:06.094419 4725 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 13:15:06 crc kubenswrapper[4725]: E1014 13:15:06.094484 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-14 13:15:06.594467485 +0000 UTC m=+23.442902294 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.094947 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.095348 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.095637 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.095796 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.095943 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.095964 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.096045 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.096273 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.096417 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.097846 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.097884 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.097809 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.098140 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.098634 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.099351 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.099950 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.100127 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.102196 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.102266 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.102519 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.102566 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.102631 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.102724 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.102886 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.103425 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.103595 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.103895 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.103962 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.104061 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.104069 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.104279 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.104598 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.104708 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.104759 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.104816 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.104865 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.104911 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.104919 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.105043 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.105591 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.105678 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.106901 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.106964 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.107774 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.107921 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.108239 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.108354 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.108402 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.109094 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.109103 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.109070 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.109218 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.109473 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.109653 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.109688 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.109728 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.109869 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.110038 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.110490 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.110596 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.110632 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.110753 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.108422 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.108601 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.113034 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.113256 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.145605 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.145630 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.145793 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). 
InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.146267 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.146595 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.146839 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.147234 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.149299 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.149665 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.154298 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.154501 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.154688 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn8zz\" (UniqueName: \"kubernetes.io/projected/768af815-9351-4b60-a9de-9f188049acd8-kube-api-access-hn8zz\") pod \"node-resolver-n9mfx\" (UID: \"768af815-9351-4b60-a9de-9f188049acd8\") " pod="openshift-dns/node-resolver-n9mfx" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.154842 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.154931 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/768af815-9351-4b60-a9de-9f188049acd8-hosts-file\") pod \"node-resolver-n9mfx\" (UID: \"768af815-9351-4b60-a9de-9f188049acd8\") " pod="openshift-dns/node-resolver-n9mfx" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.155091 4725 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.155165 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.155228 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.155295 4725 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.155385 4725 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.155509 4725 reconciler_common.go:293] "Volume detached for volume 
\"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.155595 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.155672 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.155758 4725 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.155847 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.155854 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.155926 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.155943 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.155959 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.155981 4725 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.155996 4725 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156010 4725 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156024 4725 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 
13:15:06.156042 4725 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156056 4725 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156071 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156084 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156103 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156117 4725 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156134 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156152 4725 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156166 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156179 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156192 4725 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156210 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156225 4725 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156239 4725 reconciler_common.go:293] 
"Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156252 4725 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156271 4725 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156285 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156299 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156313 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156330 4725 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156343 4725 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156356 4725 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156373 4725 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156386 4725 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156399 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156413 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156430 4725 reconciler_common.go:293] "Volume 
detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156446 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156483 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156495 4725 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156514 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156528 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156539 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156555 4725 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156567 4725 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156579 4725 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156591 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156607 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156622 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156634 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156647 4725 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156665 4725 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156678 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156690 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156703 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156719 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156734 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156749 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156766 4725 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156778 4725 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156792 4725 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156804 4725 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156821 4725 reconciler_common.go:293] "Volume detached for volume 
\"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156833 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156846 4725 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156858 4725 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156875 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156890 4725 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156903 4725 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156917 4725 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156934 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156948 4725 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156962 4725 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156980 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.156994 4725 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc 
kubenswrapper[4725]: I1014 13:15:06.157007 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.157020 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.157037 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.157051 4725 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.157064 4725 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.157078 4725 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.157095 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.157107 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.157121 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.157138 4725 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.157151 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.157180 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.157195 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" 
Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.157213 4725 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.157227 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.157241 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.157257 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.157276 4725 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.157288 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.157301 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.157313 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.157330 4725 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.157344 4725 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.157356 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.157372 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.157385 4725 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 
13:15:06.157397 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.157410 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.157430 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.157444 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.157474 4725 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.157487 4725 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.157505 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.157519 4725 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.157534 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.157577 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.157591 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.155796 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/768af815-9351-4b60-a9de-9f188049acd8-hosts-file\") pod \"node-resolver-n9mfx\" (UID: \"768af815-9351-4b60-a9de-9f188049acd8\") " pod="openshift-dns/node-resolver-n9mfx" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.157605 4725 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.157874 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.157907 4725 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.157955 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.157983 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.158024 4725 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.158040 4725 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.158057 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.158128 4725 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.158148 4725 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.158163 4725 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.158190 4725 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.158204 4725 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.158219 4725 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node 
\"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.158233 4725 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.158255 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.158269 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.158282 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.158293 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.158306 4725 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.158316 4725 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.158326 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.158339 4725 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.158355 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.158368 4725 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.158383 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.158397 4725 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.158407 4725 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.158419 4725 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.158431 4725 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.159226 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.159303 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.159315 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.159327 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.159341 4725 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.159354 4725 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.159363 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.159373 4725 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.159385 4725 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.159395 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.159404 
4725 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.159416 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.159427 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.159439 4725 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.159485 4725 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.159499 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.159509 4725 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.159520 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.159532 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.159580 4725 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.159590 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.159603 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.159612 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: 
I1014 13:15:06.159656 4725 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.159672 4725 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.159685 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.159700 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.159734 4725 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.159745 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.159758 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.163324 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.163860 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.168207 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.168628 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.169444 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.169614 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.170979 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.171181 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.171282 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.171365 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.172436 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.182168 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.182611 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.183512 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.187481 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.222414 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-t9hh9"] Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.222829 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.223731 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-kbgwl"] Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.224296 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.228239 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn8zz\" (UniqueName: \"kubernetes.io/projected/768af815-9351-4b60-a9de-9f188049acd8-kube-api-access-hn8zz\") pod \"node-resolver-n9mfx\" (UID: \"768af815-9351-4b60-a9de-9f188049acd8\") " pod="openshift-dns/node-resolver-n9mfx" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.228414 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.228467 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.228634 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.228762 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.228794 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.228655 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.229128 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.229223 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-l7nwj"] Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.229545 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.229749 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.229953 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.230022 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.232223 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.232882 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.237396 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.242358 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.248879 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.248926 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.260667 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.260693 4725 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.260702 4725 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.260712 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.260721 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.260730 4725 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.260740 4725 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.260749 4725 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.260759 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.260768 4725 reconciler_common.go:293] "Volume detached for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.260777 4725 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.260785 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.260793 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.260802 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.260810 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.264317 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.269786 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-n9mfx" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.271531 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:06 crc kubenswrapper[4725]: W1014 13:15:06.279222 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-4ef3cee01d77c8f44b6847b27e1e9b33742703ef6239085de928c373a81faf61 WatchSource:0}: Error finding container 4ef3cee01d77c8f44b6847b27e1e9b33742703ef6239085de928c373a81faf61: Status 404 returned error can't find the container with id 4ef3cee01d77c8f44b6847b27e1e9b33742703ef6239085de928c373a81faf61 Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.286741 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.304388 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60823bb2-3f65-441b-968e-bee1ef699eaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369d927b570ab87076cac9e54716473a0856899d0f4697b4b30d07a5181c4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ddcef18a4ca3491877f3ed3e14e43e7a478afef84fe6d9614c5dfd52c25864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb26ea922d538bc6c5cff6f8d0474cefe723ab38afb6a734a1f412d7591355ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ba77f6cadef4a84c4cc13ac9e1a86d31f32655aa5ec68b1b948fed3cadf3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T13:14:59Z\\\",\\\"message\\\":\\\"W1014 13:14:49.096766 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1014 
13:14:49.097281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1760447689 cert, and key in /tmp/serving-cert-1900757213/serving-signer.crt, /tmp/serving-cert-1900757213/serving-signer.key\\\\nI1014 13:14:49.460858 1 observer_polling.go:159] Starting file observer\\\\nW1014 13:14:49.464193 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 13:14:49.464428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 13:14:49.465360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1900757213/tls.crt::/tmp/serving-cert-1900757213/tls.key\\\\\\\"\\\\nF1014 13:14:59.791051 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360cb71d40b47c350378789affa9d40d1097c487f19759ceac7dd653211dc523\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.326898 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.348510 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.361139 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d4ed727c-f4d1-47cd-a218-e22803eb1750-cni-binary-copy\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.361175 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f27c973f-d487-4b38-8921-f9c96635219e-system-cni-dir\") pod \"multus-additional-cni-plugins-l7nwj\" (UID: \"f27c973f-d487-4b38-8921-f9c96635219e\") " pod="openshift-multus/multus-additional-cni-plugins-l7nwj" Oct 14 13:15:06 crc 
kubenswrapper[4725]: I1014 13:15:06.361195 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c-proxy-tls\") pod \"machine-config-daemon-t9hh9\" (UID: \"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\") " pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.361210 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d4ed727c-f4d1-47cd-a218-e22803eb1750-host-run-k8s-cni-cncf-io\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.361229 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d4ed727c-f4d1-47cd-a218-e22803eb1750-host-run-multus-certs\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.361243 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f27c973f-d487-4b38-8921-f9c96635219e-cnibin\") pod \"multus-additional-cni-plugins-l7nwj\" (UID: \"f27c973f-d487-4b38-8921-f9c96635219e\") " pod="openshift-multus/multus-additional-cni-plugins-l7nwj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.361261 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f27c973f-d487-4b38-8921-f9c96635219e-os-release\") pod \"multus-additional-cni-plugins-l7nwj\" (UID: \"f27c973f-d487-4b38-8921-f9c96635219e\") " pod="openshift-multus/multus-additional-cni-plugins-l7nwj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.361285 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f27c973f-d487-4b38-8921-f9c96635219e-cni-binary-copy\") pod \"multus-additional-cni-plugins-l7nwj\" (UID: \"f27c973f-d487-4b38-8921-f9c96635219e\") " pod="openshift-multus/multus-additional-cni-plugins-l7nwj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.361308 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f27c973f-d487-4b38-8921-f9c96635219e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-l7nwj\" (UID: \"f27c973f-d487-4b38-8921-f9c96635219e\") " pod="openshift-multus/multus-additional-cni-plugins-l7nwj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.361322 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d4ed727c-f4d1-47cd-a218-e22803eb1750-etc-kubernetes\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.361342 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/d4ed727c-f4d1-47cd-a218-e22803eb1750-host-var-lib-cni-bin\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.361358 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d4ed727c-f4d1-47cd-a218-e22803eb1750-host-var-lib-cni-multus\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.361379 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d4ed727c-f4d1-47cd-a218-e22803eb1750-host-var-lib-kubelet\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.361394 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c-mcd-auth-proxy-config\") pod \"machine-config-daemon-t9hh9\" (UID: \"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\") " pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.361409 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d4ed727c-f4d1-47cd-a218-e22803eb1750-cnibin\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.361422 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d4ed727c-f4d1-47cd-a218-e22803eb1750-host-run-netns\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.361438 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d4ed727c-f4d1-47cd-a218-e22803eb1750-multus-daemon-config\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.361490 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c-rootfs\") pod \"machine-config-daemon-t9hh9\" (UID: \"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\") " pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.361509 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcldt\" (UniqueName: \"kubernetes.io/projected/ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c-kube-api-access-hcldt\") pod \"machine-config-daemon-t9hh9\" (UID: \"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\") " pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.361524 4725 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d4ed727c-f4d1-47cd-a218-e22803eb1750-system-cni-dir\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.361593 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d4ed727c-f4d1-47cd-a218-e22803eb1750-multus-socket-dir-parent\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.361616 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d4ed727c-f4d1-47cd-a218-e22803eb1750-multus-cni-dir\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.361632 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v45p2\" (UniqueName: \"kubernetes.io/projected/f27c973f-d487-4b38-8921-f9c96635219e-kube-api-access-v45p2\") pod \"multus-additional-cni-plugins-l7nwj\" (UID: \"f27c973f-d487-4b38-8921-f9c96635219e\") " pod="openshift-multus/multus-additional-cni-plugins-l7nwj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.361649 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d4ed727c-f4d1-47cd-a218-e22803eb1750-hostroot\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.361668 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn97f\" (UniqueName: \"kubernetes.io/projected/d4ed727c-f4d1-47cd-a218-e22803eb1750-kube-api-access-gn97f\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.361704 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d4ed727c-f4d1-47cd-a218-e22803eb1750-os-release\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.361720 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d4ed727c-f4d1-47cd-a218-e22803eb1750-multus-conf-dir\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.361735 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f27c973f-d487-4b38-8921-f9c96635219e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-l7nwj\" (UID: \"f27c973f-d487-4b38-8921-f9c96635219e\") " pod="openshift-multus/multus-additional-cni-plugins-l7nwj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.361739 4725 
status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9mfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"768af815-9351-4b60-a9de-9f188049acd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8zz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9mfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.433533 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9hh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.462364 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d4ed727c-f4d1-47cd-a218-e22803eb1750-os-release\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.462413 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d4ed727c-f4d1-47cd-a218-e22803eb1750-multus-conf-dir\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.462436 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn97f\" (UniqueName: \"kubernetes.io/projected/d4ed727c-f4d1-47cd-a218-e22803eb1750-kube-api-access-gn97f\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.462478 4725 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f27c973f-d487-4b38-8921-f9c96635219e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-l7nwj\" (UID: \"f27c973f-d487-4b38-8921-f9c96635219e\") " pod="openshift-multus/multus-additional-cni-plugins-l7nwj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.462513 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d4ed727c-f4d1-47cd-a218-e22803eb1750-cni-binary-copy\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.462534 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f27c973f-d487-4b38-8921-f9c96635219e-system-cni-dir\") pod \"multus-additional-cni-plugins-l7nwj\" (UID: \"f27c973f-d487-4b38-8921-f9c96635219e\") " pod="openshift-multus/multus-additional-cni-plugins-l7nwj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.462555 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c-proxy-tls\") pod \"machine-config-daemon-t9hh9\" (UID: \"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\") " pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.462539 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d4ed727c-f4d1-47cd-a218-e22803eb1750-multus-conf-dir\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.462588 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d4ed727c-f4d1-47cd-a218-e22803eb1750-host-run-k8s-cni-cncf-io\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.462634 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d4ed727c-f4d1-47cd-a218-e22803eb1750-host-run-multus-certs\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.462641 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d4ed727c-f4d1-47cd-a218-e22803eb1750-host-run-k8s-cni-cncf-io\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.462656 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f27c973f-d487-4b38-8921-f9c96635219e-cnibin\") pod \"multus-additional-cni-plugins-l7nwj\" (UID: \"f27c973f-d487-4b38-8921-f9c96635219e\") " pod="openshift-multus/multus-additional-cni-plugins-l7nwj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.462672 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/f27c973f-d487-4b38-8921-f9c96635219e-os-release\") pod \"multus-additional-cni-plugins-l7nwj\" (UID: \"f27c973f-d487-4b38-8921-f9c96635219e\") " pod="openshift-multus/multus-additional-cni-plugins-l7nwj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.462688 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f27c973f-d487-4b38-8921-f9c96635219e-cni-binary-copy\") pod \"multus-additional-cni-plugins-l7nwj\" (UID: \"f27c973f-d487-4b38-8921-f9c96635219e\") " pod="openshift-multus/multus-additional-cni-plugins-l7nwj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.462705 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f27c973f-d487-4b38-8921-f9c96635219e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-l7nwj\" (UID: \"f27c973f-d487-4b38-8921-f9c96635219e\") " pod="openshift-multus/multus-additional-cni-plugins-l7nwj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.462741 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d4ed727c-f4d1-47cd-a218-e22803eb1750-etc-kubernetes\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.462757 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d4ed727c-f4d1-47cd-a218-e22803eb1750-host-var-lib-cni-bin\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.462801 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d4ed727c-f4d1-47cd-a218-e22803eb1750-host-var-lib-cni-multus\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.462823 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d4ed727c-f4d1-47cd-a218-e22803eb1750-host-var-lib-kubelet\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.462842 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c-mcd-auth-proxy-config\") pod \"machine-config-daemon-t9hh9\" (UID: \"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\") " pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.462863 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d4ed727c-f4d1-47cd-a218-e22803eb1750-system-cni-dir\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.462897 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/d4ed727c-f4d1-47cd-a218-e22803eb1750-cnibin\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.462914 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d4ed727c-f4d1-47cd-a218-e22803eb1750-host-run-netns\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.462930 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d4ed727c-f4d1-47cd-a218-e22803eb1750-multus-daemon-config\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.462948 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c-rootfs\") pod \"machine-config-daemon-t9hh9\" (UID: \"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\") " pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.462972 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcldt\" (UniqueName: \"kubernetes.io/projected/ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c-kube-api-access-hcldt\") pod \"machine-config-daemon-t9hh9\" (UID: \"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\") " pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.463001 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d4ed727c-f4d1-47cd-a218-e22803eb1750-multus-cni-dir\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.463020 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d4ed727c-f4d1-47cd-a218-e22803eb1750-multus-socket-dir-parent\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.463034 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f27c973f-d487-4b38-8921-f9c96635219e-cnibin\") pod \"multus-additional-cni-plugins-l7nwj\" (UID: \"f27c973f-d487-4b38-8921-f9c96635219e\") " pod="openshift-multus/multus-additional-cni-plugins-l7nwj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.463066 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d4ed727c-f4d1-47cd-a218-e22803eb1750-hostroot\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.463043 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d4ed727c-f4d1-47cd-a218-e22803eb1750-hostroot\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " 
pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.463093 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d4ed727c-f4d1-47cd-a218-e22803eb1750-host-var-lib-cni-bin\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.463112 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v45p2\" (UniqueName: \"kubernetes.io/projected/f27c973f-d487-4b38-8921-f9c96635219e-kube-api-access-v45p2\") pod \"multus-additional-cni-plugins-l7nwj\" (UID: \"f27c973f-d487-4b38-8921-f9c96635219e\") " pod="openshift-multus/multus-additional-cni-plugins-l7nwj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.463137 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d4ed727c-f4d1-47cd-a218-e22803eb1750-host-var-lib-kubelet\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.463190 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d4ed727c-f4d1-47cd-a218-e22803eb1750-system-cni-dir\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.463222 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d4ed727c-f4d1-47cd-a218-e22803eb1750-cnibin\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.463246 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d4ed727c-f4d1-47cd-a218-e22803eb1750-host-run-netns\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.463273 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f27c973f-d487-4b38-8921-f9c96635219e-system-cni-dir\") pod \"multus-additional-cni-plugins-l7nwj\" (UID: \"f27c973f-d487-4b38-8921-f9c96635219e\") " pod="openshift-multus/multus-additional-cni-plugins-l7nwj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.462941 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d4ed727c-f4d1-47cd-a218-e22803eb1750-etc-kubernetes\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.463321 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f27c973f-d487-4b38-8921-f9c96635219e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-l7nwj\" (UID: \"f27c973f-d487-4b38-8921-f9c96635219e\") " pod="openshift-multus/multus-additional-cni-plugins-l7nwj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.463345 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rootfs\" (UniqueName: \"kubernetes.io/host-path/ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c-rootfs\") pod \"machine-config-daemon-t9hh9\" (UID: \"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\") " pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.463974 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d4ed727c-f4d1-47cd-a218-e22803eb1750-multus-daemon-config\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.464001 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d4ed727c-f4d1-47cd-a218-e22803eb1750-cni-binary-copy\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.464071 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c-mcd-auth-proxy-config\") pod \"machine-config-daemon-t9hh9\" (UID: \"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\") " pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.463116 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d4ed727c-f4d1-47cd-a218-e22803eb1750-host-var-lib-cni-multus\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.463002 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d4ed727c-f4d1-47cd-a218-e22803eb1750-host-run-multus-certs\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.464186 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f27c973f-d487-4b38-8921-f9c96635219e-os-release\") pod \"multus-additional-cni-plugins-l7nwj\" (UID: \"f27c973f-d487-4b38-8921-f9c96635219e\") " pod="openshift-multus/multus-additional-cni-plugins-l7nwj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.464211 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f27c973f-d487-4b38-8921-f9c96635219e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-l7nwj\" (UID: \"f27c973f-d487-4b38-8921-f9c96635219e\") " pod="openshift-multus/multus-additional-cni-plugins-l7nwj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.464229 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d4ed727c-f4d1-47cd-a218-e22803eb1750-multus-cni-dir\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.464284 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/d4ed727c-f4d1-47cd-a218-e22803eb1750-multus-socket-dir-parent\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.464366 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d4ed727c-f4d1-47cd-a218-e22803eb1750-os-release\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.464727 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f27c973f-d487-4b38-8921-f9c96635219e-cni-binary-copy\") pod \"multus-additional-cni-plugins-l7nwj\" (UID: \"f27c973f-d487-4b38-8921-f9c96635219e\") " pod="openshift-multus/multus-additional-cni-plugins-l7nwj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.468484 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.475040 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c-proxy-tls\") pod \"machine-config-daemon-t9hh9\" (UID: \"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\") " pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.496101 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbgwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4ed727c-f4d1-47cd-a218-e22803eb1750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gn97f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbgwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.506224 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v45p2\" (UniqueName: \"kubernetes.io/projected/f27c973f-d487-4b38-8921-f9c96635219e-kube-api-access-v45p2\") pod \"multus-additional-cni-plugins-l7nwj\" (UID: \"f27c973f-d487-4b38-8921-f9c96635219e\") " pod="openshift-multus/multus-additional-cni-plugins-l7nwj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.506594 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcldt\" (UniqueName: \"kubernetes.io/projected/ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c-kube-api-access-hcldt\") pod \"machine-config-daemon-t9hh9\" (UID: \"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\") " pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.511165 4725 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn97f\" (UniqueName: \"kubernetes.io/projected/d4ed727c-f4d1-47cd-a218-e22803eb1750-kube-api-access-gn97f\") pod \"multus-kbgwl\" (UID: \"d4ed727c-f4d1-47cd-a218-e22803eb1750\") " pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.533669 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.552992 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.553097 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.564361 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-kbgwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.566574 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.576700 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.578995 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" Oct 14 13:15:06 crc kubenswrapper[4725]: W1014 13:15:06.583663 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4ed727c_f4d1_47cd_a218_e22803eb1750.slice/crio-80006f2292e739199c1f07864aaaf87a0c28b081fc9afaaca03af504da848705 WatchSource:0}: Error finding container 80006f2292e739199c1f07864aaaf87a0c28b081fc9afaaca03af504da848705: Status 404 returned error can't find the container with id 80006f2292e739199c1f07864aaaf87a0c28b081fc9afaaca03af504da848705 Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.596583 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f27c973f-d487-4b38-8921-f9c96635219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l7nwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.611219 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.619795 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9v9qj"] Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.620624 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.622920 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.623319 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.623360 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.623414 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.623495 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.623508 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.623651 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.627011 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9mfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"768af815-9351-4b60-a9de-9f188049acd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8zz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9mfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.638274 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9hh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.653714 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60823bb2-3f65-441b-968e-bee1ef699eaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369d927b570ab87076cac9e54716473a0856899d0f4697b4b30d07a5181c4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ddcef18a4ca3491877f3ed3e14e43e7a478afef84fe6d9614c5dfd52c25864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb26ea922d538bc6c5cff6f8d0474cefe723ab38afb6a734a1f412d7591355ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ba77f6cadef4a84c4cc13ac9e1a86d31f32655aa5ec68b1b948fed3cadf3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T13:14:59Z\\\",\\\"message\\\":\\\"W1014 13:14:49.096766 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1014 
13:14:49.097281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1760447689 cert, and key in /tmp/serving-cert-1900757213/serving-signer.crt, /tmp/serving-cert-1900757213/serving-signer.key\\\\nI1014 13:14:49.460858 1 observer_polling.go:159] Starting file observer\\\\nW1014 13:14:49.464193 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 13:14:49.464428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 13:14:49.465360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1900757213/tls.crt::/tmp/serving-cert-1900757213/tls.key\\\\\\\"\\\\nF1014 13:14:59.791051 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360cb71d40b47c350378789affa9d40d1097c487f19759ceac7dd653211dc523\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.664556 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.664664 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.664701 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.664725 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.664747 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:15:06 crc kubenswrapper[4725]: E1014 13:15:06.664875 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 13:15:06 crc kubenswrapper[4725]: E1014 13:15:06.664891 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 13:15:06 crc kubenswrapper[4725]: E1014 13:15:06.664902 4725 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 13:15:06 crc kubenswrapper[4725]: E1014 13:15:06.664946 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2025-10-14 13:15:07.664932692 +0000 UTC m=+24.513367501 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 13:15:06 crc kubenswrapper[4725]: E1014 13:15:06.664999 4725 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 13:15:06 crc kubenswrapper[4725]: E1014 13:15:06.665075 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 13:15:07.665052686 +0000 UTC m=+24.513487555 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 13:15:06 crc kubenswrapper[4725]: E1014 13:15:06.665153 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:15:07.665143559 +0000 UTC m=+24.513578368 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:15:06 crc kubenswrapper[4725]: E1014 13:15:06.665233 4725 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 13:15:06 crc kubenswrapper[4725]: E1014 13:15:06.665259 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 13:15:06 crc kubenswrapper[4725]: E1014 13:15:06.665273 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 13:15:07.665265822 +0000 UTC m=+24.513700631 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 13:15:06 crc kubenswrapper[4725]: E1014 13:15:06.665277 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 13:15:06 crc kubenswrapper[4725]: E1014 13:15:06.665292 4725 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 13:15:06 crc kubenswrapper[4725]: E1014 13:15:06.665353 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-14 13:15:07.665342954 +0000 UTC m=+24.513777823 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.668910 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.688231 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.700239 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9mfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"768af815-9351-4b60-a9de-9f188049acd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8zz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9mfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:06 crc 
kubenswrapper[4725]: I1014 13:15:06.715794 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9hh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.739014 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d54d71-93d1-4cde-940e-a371117f59bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9v9qj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.755746 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60823bb2-3f65-441b-968e-bee1ef699eaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369d927b570ab87076cac9e54716473a0856899d0f4697b4b30d07a5181c4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ddcef18a4ca3491877f3ed3e14e43e7a478afef84fe6d9614c5dfd52c25864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb26ea922d538bc6c5cff6f8d0474cefe723ab38afb6a734a1f412d7591355ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ba77f6cadef4a84c4cc13ac9e1a86d31f32655aa5ec68b1b948fed3cadf3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T13:14:59Z\\\",\\\"message\\\":\\\"W1014 13:14:49.096766 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1014 
13:14:49.097281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1760447689 cert, and key in /tmp/serving-cert-1900757213/serving-signer.crt, /tmp/serving-cert-1900757213/serving-signer.key\\\\nI1014 13:14:49.460858 1 observer_polling.go:159] Starting file observer\\\\nW1014 13:14:49.464193 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 13:14:49.464428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 13:14:49.465360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1900757213/tls.crt::/tmp/serving-cert-1900757213/tls.key\\\\\\\"\\\\nF1014 13:14:59.791051 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360cb71d40b47c350378789affa9d40d1097c487f19759ceac7dd653211dc523\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.765665 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-var-lib-openvswitch\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.765716 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-log-socket\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.765745 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/38d54d71-93d1-4cde-940e-a371117f59bd-ovnkube-script-lib\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.765768 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-host-cni-netd\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.765801 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-run-systemd\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.765818 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/38d54d71-93d1-4cde-940e-a371117f59bd-ovnkube-config\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.765834 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-run-ovn\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.765860 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 
crc kubenswrapper[4725]: I1014 13:15:06.765876 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/38d54d71-93d1-4cde-940e-a371117f59bd-env-overrides\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.766024 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-run-openvswitch\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.766047 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h8xd\" (UniqueName: \"kubernetes.io/projected/38d54d71-93d1-4cde-940e-a371117f59bd-kube-api-access-7h8xd\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.766068 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-node-log\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.766160 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-host-slash\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.766357 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-host-run-netns\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.766631 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-host-kubelet\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.766860 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-host-cni-bin\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.766928 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-systemd-units\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.766953 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-etc-openvswitch\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.766981 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-host-run-ovn-kubernetes\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.767011 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/38d54d71-93d1-4cde-940e-a371117f59bd-ovn-node-metrics-cert\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.767639 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.781285 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.795824 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.807717 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbgwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4ed727c-f4d1-47cd-a218-e22803eb1750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gn97f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbgwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.821925 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.841801 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.858420 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f27c973f-d487-4b38-8921-f9c96635219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l7nwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.867801 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-run-openvswitch\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.867839 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-7h8xd\" (UniqueName: \"kubernetes.io/projected/38d54d71-93d1-4cde-940e-a371117f59bd-kube-api-access-7h8xd\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.867858 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-node-log\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.867876 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-host-run-netns\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.867890 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-host-slash\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.867907 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-host-kubelet\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.867922 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-host-cni-bin\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.867948 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-host-run-ovn-kubernetes\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.867962 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/38d54d71-93d1-4cde-940e-a371117f59bd-ovn-node-metrics-cert\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.867980 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-systemd-units\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.867994 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-etc-openvswitch\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.868010 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-var-lib-openvswitch\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.868025 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-log-socket\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.868040 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/38d54d71-93d1-4cde-940e-a371117f59bd-ovnkube-script-lib\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.868055 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-host-cni-netd\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.868073 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-run-systemd\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.868087 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/38d54d71-93d1-4cde-940e-a371117f59bd-ovnkube-config\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.868108 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-run-ovn\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.868134 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/38d54d71-93d1-4cde-940e-a371117f59bd-env-overrides\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.868163 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.868230 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.868274 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-run-openvswitch\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.868469 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-etc-openvswitch\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.868487 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-host-kubelet\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.868529 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-host-run-ovn-kubernetes\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.868573 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-var-lib-openvswitch\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.868577 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-host-run-netns\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.868556 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-node-log\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.868608 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-log-socket\") pod 
\"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.868602 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-host-cni-bin\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.868639 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-host-slash\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.868759 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-run-systemd\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.868784 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-run-ovn\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.869020 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-systemd-units\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.869074 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-host-cni-netd\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.869357 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/38d54d71-93d1-4cde-940e-a371117f59bd-ovnkube-script-lib\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.869479 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/38d54d71-93d1-4cde-940e-a371117f59bd-ovnkube-config\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.869546 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/38d54d71-93d1-4cde-940e-a371117f59bd-env-overrides\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.872089 4725 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/38d54d71-93d1-4cde-940e-a371117f59bd-ovn-node-metrics-cert\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.884235 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h8xd\" (UniqueName: \"kubernetes.io/projected/38d54d71-93d1-4cde-940e-a371117f59bd-kube-api-access-7h8xd\") pod \"ovnkube-node-9v9qj\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: I1014 13:15:06.952999 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:06 crc kubenswrapper[4725]: W1014 13:15:06.974761 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38d54d71_93d1_4cde_940e_a371117f59bd.slice/crio-194be38e6bd2bf02aa98279e4967036f2f2beb4448651b5b3108f5f6e8e21012 WatchSource:0}: Error finding container 194be38e6bd2bf02aa98279e4967036f2f2beb4448651b5b3108f5f6e8e21012: Status 404 returned error can't find the container with id 194be38e6bd2bf02aa98279e4967036f2f2beb4448651b5b3108f5f6e8e21012 Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.082570 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"31c335f15dced5e4240b10bf8cf93f05aaa7954860de085d7d25e465ac49fcc5"} Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.082629 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"41ada4aa976fb1be52ba1daea092bc4a477f4fa57a4175db09718886f670cf1f"} Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.082648 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4ef3cee01d77c8f44b6847b27e1e9b33742703ef6239085de928c373a81faf61"} Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.083433 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"3255fd6ff655da3a21977050bc6e24d783fb3c9f0ad7ee0d48fc9afe8178767a"} Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.084938 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.085419 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.087414 4725 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987" exitCode=255 Oct 14 13:15:07 crc kubenswrapper[4725]: 
I1014 13:15:07.087511 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987"} Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.087564 4725 scope.go:117] "RemoveContainer" containerID="54ba77f6cadef4a84c4cc13ac9e1a86d31f32655aa5ec68b1b948fed3cadf3e2" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.087963 4725 scope.go:117] "RemoveContainer" containerID="ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987" Oct 14 13:15:07 crc kubenswrapper[4725]: E1014 13:15:07.088100 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.089245 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kbgwl" event={"ID":"d4ed727c-f4d1-47cd-a218-e22803eb1750","Type":"ContainerStarted","Data":"e839b3fcf6bd42a4f3c8167cfc3a8ce3d333f519ef8fa12daa8bfb7f795e6bb6"} Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.089279 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kbgwl" event={"ID":"d4ed727c-f4d1-47cd-a218-e22803eb1750","Type":"ContainerStarted","Data":"80006f2292e739199c1f07864aaaf87a0c28b081fc9afaaca03af504da848705"} Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.097490 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" event={"ID":"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c","Type":"ContainerStarted","Data":"7400c43922418faea2f15eab69dfcdb6d01fd14117423a0ce3de487078acd746"} Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.097975 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" event={"ID":"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c","Type":"ContainerStarted","Data":"6cc50d2bb6d8b894c9524b2557eda22f857a08c286a90a95ab4be265b524020c"} Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.097990 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" event={"ID":"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c","Type":"ContainerStarted","Data":"d16d440c48ea78c52e4e0c6ccdb319d7035d30474b481b2ee7a72a28a50e9544"} Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.099665 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-n9mfx" event={"ID":"768af815-9351-4b60-a9de-9f188049acd8","Type":"ContainerStarted","Data":"2d9cb10af1479948d9b616c0c054bea341c96c4964809f1b208472617a376c8e"} Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.099690 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-n9mfx" event={"ID":"768af815-9351-4b60-a9de-9f188049acd8","Type":"ContainerStarted","Data":"deddff0e27d1b445efa924033852c26ffe862df3ca28460a61d1f27536449cfb"} Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.101483 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"bf13954e4938ed6d4493d155d0d8d3ddd585e161184d1869879f8464fb537bb4"} Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.101538 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"a30caf3a958b83363f28fb1d0031f1e354817b024f6cda0d3241522bfe7a0c12"} Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.102711 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" event={"ID":"38d54d71-93d1-4cde-940e-a371117f59bd","Type":"ContainerStarted","Data":"194be38e6bd2bf02aa98279e4967036f2f2beb4448651b5b3108f5f6e8e21012"} Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.103327 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60823bb2-3f65-441b-968e-bee1ef699eaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369d927b570ab87076cac9e54716473a0856899d0f4697b4b30d07a5181c4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ddcef18a4ca3491877f3ed3e14e43e7a478afef84fe6d9614c5dfd52c25864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb26ea922d538bc6c5cff6f8d0474cefe723ab38afb6a734a1f412d7591355ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ba77f6cadef4a84c4cc13ac9e1a86d31f32655aa5ec68b1b948fed3cadf3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T13:14:59Z\\\",\\\"message\\\":\\\"W1014 13:14:49.096766 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1014 
13:14:49.097281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1760447689 cert, and key in /tmp/serving-cert-1900757213/serving-signer.crt, /tmp/serving-cert-1900757213/serving-signer.key\\\\nI1014 13:14:49.460858 1 observer_polling.go:159] Starting file observer\\\\nW1014 13:14:49.464193 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 13:14:49.464428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 13:14:49.465360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1900757213/tls.crt::/tmp/serving-cert-1900757213/tls.key\\\\\\\"\\\\nF1014 13:14:59.791051 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360cb71d40b47c350378789affa9d40d1097c487f19759ceac7dd653211dc523\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.104126 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" event={"ID":"f27c973f-d487-4b38-8921-f9c96635219e","Type":"ContainerStarted","Data":"ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7"} Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.104177 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" event={"ID":"f27c973f-d487-4b38-8921-f9c96635219e","Type":"ContainerStarted","Data":"f17ff84277b6099be1f30772aab88fc2ea48fe3fa32b3cca8b891650cdf71d10"} Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.115634 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.127635 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbgwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4ed727c-f4d1-47cd-a218-e22803eb1750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gn97f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbgwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.139190 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.149403 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c335f15dced5e4240b10bf8cf93f05aaa7954860de085d7d25e465ac49fcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ada4aa976fb1be52ba1daea092bc4a477f4fa57a4175db09718886f670cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.170535 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.185322 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.199090 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f27c973f-d487-4b38-8921-f9c96635219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l7nwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.210420 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.219756 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9mfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"768af815-9351-4b60-a9de-9f188049acd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8zz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9mfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.227883 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9hh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.243624 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d54d71-93d1-4cde-940e-a371117f59bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin
\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9v9qj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.253509 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.263965 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c335f15dced5e4240b10bf8cf93f05aaa7954860de085d7d25e465ac49fcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ada4aa976fb1be52ba1daea092bc4a477f4fa57a4175db09718886f670cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.273081 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbgwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4ed727c-f4d1-47cd-a218-e22803eb1750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e839b3fcf6bd42a4f3c8167cfc3a8ce3d333f519ef8fa12daa8bfb7f795e6bb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gn97f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbgwl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.284345 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf13954e4938ed6d4493d155d0d8d3ddd585e161184d1869879f8464fb537bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.293167 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.303663 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f27c973f-d487-4b38-8921-f9c96635219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l7nwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.312201 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.320534 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9mfx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"768af815-9351-4b60-a9de-9f188049acd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d9cb10af1479948d9b616c0c054bea341c96c4964809f1b208472617a376c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8zz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9mfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.333215 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7400c43922418faea2f15eab69dfcdb6d01fd14117423a0ce3de487078acd746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cc50d2bb6d8b894c9524b2557eda22f857a08c286a90a95ab4be265b524020c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9hh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.351075 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d54d71-93d1-4cde-940e-a371117f59bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9v9qj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.366703 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60823bb2-3f65-441b-968e-bee1ef699eaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369d927b570ab87076cac9e54716473a0856899d0f4697b4b30d07a5181c4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ddcef18a4ca3491877f3ed3e14e43e7a478afef84fe6d9614c5dfd52c25864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb26ea922d538bc6c5cff6f8d0474cefe723ab38afb6a734a1f412d7591355ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ba77f6cadef4a84c4cc13ac9e1a86d31f32655aa5ec68b1b948fed3cadf3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T13:14:59Z\\\",\\\"message\\\":\\\"W1014 13:14:49.096766 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1014 
13:14:49.097281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1760447689 cert, and key in /tmp/serving-cert-1900757213/serving-signer.crt, /tmp/serving-cert-1900757213/serving-signer.key\\\\nI1014 13:14:49.460858 1 observer_polling.go:159] Starting file observer\\\\nW1014 13:14:49.464193 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 13:14:49.464428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 13:14:49.465360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1900757213/tls.crt::/tmp/serving-cert-1900757213/tls.key\\\\\\\"\\\\nF1014 13:14:59.791051 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW1014 13:15:05.624087 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 13:15:05.624866 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 13:15:05.626194 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3329203933/tls.crt::/tmp/serving-cert-3329203933/tls.key\\\\\\\"\\\\nI1014 13:15:06.092172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 13:15:06.112645 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 13:15:06.112671 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 13:15:06.112690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 13:15:06.112696 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 13:15:06.126751 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 13:15:06.126778 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126783 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 13:15:06.126790 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 13:15:06.126793 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 13:15:06.126795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 
13:15:06.127007 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 13:15:06.128605 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360cb71d40b47c350378789affa9d40d1097c487f19759ceac7dd653211dc523\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.378964 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.680888 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:15:07 crc kubenswrapper[4725]: E1014 13:15:07.681099 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:15:09.681071521 +0000 UTC m=+26.529506330 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.681405 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:15:07 crc kubenswrapper[4725]: E1014 13:15:07.681560 4725 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 13:15:07 crc kubenswrapper[4725]: E1014 13:15:07.681618 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 13:15:09.681609445 +0000 UTC m=+26.530044244 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.681707 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.681796 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.681872 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:15:07 crc kubenswrapper[4725]: E1014 13:15:07.681741 4725 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 13:15:07 crc kubenswrapper[4725]: E1014 13:15:07.681976 4725 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 13:15:09.681967966 +0000 UTC m=+26.530402775 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 13:15:07 crc kubenswrapper[4725]: E1014 13:15:07.682039 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 13:15:07 crc kubenswrapper[4725]: E1014 13:15:07.682050 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 13:15:07 crc kubenswrapper[4725]: E1014 13:15:07.682060 4725 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 13:15:07 crc kubenswrapper[4725]: E1014 13:15:07.682084 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-14 13:15:09.682077569 +0000 UTC m=+26.530512378 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 13:15:07 crc kubenswrapper[4725]: E1014 13:15:07.682499 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 13:15:07 crc kubenswrapper[4725]: E1014 13:15:07.682609 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 13:15:07 crc kubenswrapper[4725]: E1014 13:15:07.682693 4725 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 13:15:07 crc kubenswrapper[4725]: E1014 13:15:07.682833 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-14 13:15:09.682814428 +0000 UTC m=+26.531249307 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.724257 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-8fjcf"] Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.724749 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-8fjcf" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.726700 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.727241 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.727502 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.727715 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.736244 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbgwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4ed727c-f4d1-47cd-a218-e22803eb1750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e839b3fcf6bd42a4f3c8167cfc3a8ce3d333f519ef8fa12daa8bfb7f795e6bb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",
\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gn97f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbgwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.749123 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.766307 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c335f15dced5e4240b10bf8cf93f05aaa7954860de085d7d25e465ac49fcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ada4aa976fb1be52ba1daea092bc4a477f4fa57a4175db09718886f670cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.776946 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf13954e4938ed6d4493d155d0d8d3ddd585e161184d1869879f8464fb537bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.786021 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.801433 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f27c973f-d487-4b38-8921-f9c96635219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l7nwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.809166 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8fjcf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db996ea5-a3bf-4db3-a0df-fdd640228c83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8fjcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.821109 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.828877 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9mfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"768af815-9351-4b60-a9de-9f188049acd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d9cb10af1479948d9b616c0c054bea341c96c4964809f1b208472617a376c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8zz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9mfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.849840 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7400c43922418faea2f15eab69dfcdb6d01fd14117423a0ce3de487078acd746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cc50d2bb6d8b894c9524b2557eda22f857a08c286a90a95ab4be265b524020c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9hh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.883618 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/db996ea5-a3bf-4db3-a0df-fdd640228c83-host\") pod \"node-ca-8fjcf\" (UID: \"db996ea5-a3bf-4db3-a0df-fdd640228c83\") " pod="openshift-image-registry/node-ca-8fjcf" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.883661 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/db996ea5-a3bf-4db3-a0df-fdd640228c83-serviceca\") pod \"node-ca-8fjcf\" (UID: \"db996ea5-a3bf-4db3-a0df-fdd640228c83\") " pod="openshift-image-registry/node-ca-8fjcf" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.883699 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnnbq\" (UniqueName: \"kubernetes.io/projected/db996ea5-a3bf-4db3-a0df-fdd640228c83-kube-api-access-dnnbq\") pod \"node-ca-8fjcf\" (UID: \"db996ea5-a3bf-4db3-a0df-fdd640228c83\") " pod="openshift-image-registry/node-ca-8fjcf" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.894421 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d54d71-93d1-4cde-940e-a371117f59bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9v9qj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.920555 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.920610 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.920670 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:15:07 crc kubenswrapper[4725]: E1014 13:15:07.920707 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 13:15:07 crc kubenswrapper[4725]: E1014 13:15:07.920981 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 13:15:07 crc kubenswrapper[4725]: E1014 13:15:07.921068 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.924939 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.925642 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.926936 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.927752 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.929012 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.929614 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.930330 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.931510 
4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.932214 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.933309 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.934020 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.937171 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.937866 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60823bb2-3f65-441b-968e-bee1ef699eaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369d927b570ab87076cac9e54716473a0856899d0f4697b4b30d07a5181c4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ddcef18a4ca3491877f3ed3e14e43e7a478afef84fe6d9614c5dfd52c25864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb26ea922d538bc6c5cff6f8d0474cefe723ab38afb6a734a1f412d7591355ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ba77f6cadef4a84c4cc13ac9e1a86d31f32655aa5ec68b1b948fed3cadf3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T13:14:59Z\\\",\\\"message\\\":\\\"W1014 13:14:49.096766 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1014 
13:14:49.097281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1760447689 cert, and key in /tmp/serving-cert-1900757213/serving-signer.crt, /tmp/serving-cert-1900757213/serving-signer.key\\\\nI1014 13:14:49.460858 1 observer_polling.go:159] Starting file observer\\\\nW1014 13:14:49.464193 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 13:14:49.464428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 13:14:49.465360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1900757213/tls.crt::/tmp/serving-cert-1900757213/tls.key\\\\\\\"\\\\nF1014 13:14:59.791051 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW1014 13:15:05.624087 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 13:15:05.624866 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 13:15:05.626194 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3329203933/tls.crt::/tmp/serving-cert-3329203933/tls.key\\\\\\\"\\\\nI1014 13:15:06.092172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 13:15:06.112645 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 13:15:06.112671 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 13:15:06.112690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 13:15:06.112696 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 13:15:06.126751 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 13:15:06.126778 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126783 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 13:15:06.126790 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 13:15:06.126793 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 13:15:06.126795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 
13:15:06.127007 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 13:15:06.128605 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360cb71d40b47c350378789affa9d40d1097c487f19759ceac7dd653211dc523\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.938006 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.938692 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.943313 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.944042 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.945518 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.946045 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.946754 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.956601 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.957304 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.958530 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.959043 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.960311 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.960979 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.961782 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.963301 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.963953 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.965268 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.965884 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" 
path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.970150 4725 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.970314 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.972114 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.972576 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.978515 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.979414 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.981285 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.982263 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.983476 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.984253 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.984478 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnnbq\" (UniqueName: \"kubernetes.io/projected/db996ea5-a3bf-4db3-a0df-fdd640228c83-kube-api-access-dnnbq\") pod \"node-ca-8fjcf\" (UID: \"db996ea5-a3bf-4db3-a0df-fdd640228c83\") " pod="openshift-image-registry/node-ca-8fjcf" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.984561 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/db996ea5-a3bf-4db3-a0df-fdd640228c83-host\") pod \"node-ca-8fjcf\" (UID: \"db996ea5-a3bf-4db3-a0df-fdd640228c83\") " pod="openshift-image-registry/node-ca-8fjcf" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.984580 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/db996ea5-a3bf-4db3-a0df-fdd640228c83-serviceca\") pod \"node-ca-8fjcf\" (UID: \"db996ea5-a3bf-4db3-a0df-fdd640228c83\") " pod="openshift-image-registry/node-ca-8fjcf" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.984632 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/db996ea5-a3bf-4db3-a0df-fdd640228c83-host\") pod \"node-ca-8fjcf\" (UID: \"db996ea5-a3bf-4db3-a0df-fdd640228c83\") " pod="openshift-image-registry/node-ca-8fjcf" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.985516 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/db996ea5-a3bf-4db3-a0df-fdd640228c83-serviceca\") pod \"node-ca-8fjcf\" (UID: \"db996ea5-a3bf-4db3-a0df-fdd640228c83\") " pod="openshift-image-registry/node-ca-8fjcf" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.989121 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.989727 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.991169 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.991967 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.993244 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.993824 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.994999 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.995804 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.997230 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.997865 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.998710 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Oct 14 13:15:07 crc kubenswrapper[4725]: I1014 13:15:07.999256 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Oct 14 13:15:08 crc kubenswrapper[4725]: I1014 13:15:08.000342 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Oct 14 13:15:08 crc kubenswrapper[4725]: I1014 13:15:08.001025 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Oct 14 13:15:08 crc kubenswrapper[4725]: I1014 13:15:08.001633 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Oct 14 13:15:08 crc kubenswrapper[4725]: I1014 13:15:08.011363 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnnbq\" (UniqueName: \"kubernetes.io/projected/db996ea5-a3bf-4db3-a0df-fdd640228c83-kube-api-access-dnnbq\") pod \"node-ca-8fjcf\" (UID: \"db996ea5-a3bf-4db3-a0df-fdd640228c83\") " pod="openshift-image-registry/node-ca-8fjcf" Oct 14 13:15:08 crc kubenswrapper[4725]: I1014 13:15:08.042494 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-8fjcf" Oct 14 13:15:08 crc kubenswrapper[4725]: W1014 13:15:08.054935 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb996ea5_a3bf_4db3_a0df_fdd640228c83.slice/crio-ed7f527d27ab4dbd4060c4e39e9d1ff40fd162ef707dbafe3a648e2d6d1993d0 WatchSource:0}: Error finding container ed7f527d27ab4dbd4060c4e39e9d1ff40fd162ef707dbafe3a648e2d6d1993d0: Status 404 returned error can't find the container with id ed7f527d27ab4dbd4060c4e39e9d1ff40fd162ef707dbafe3a648e2d6d1993d0 Oct 14 13:15:08 crc kubenswrapper[4725]: I1014 13:15:08.109189 4725 generic.go:334] "Generic (PLEG): container finished" podID="f27c973f-d487-4b38-8921-f9c96635219e" containerID="ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7" exitCode=0 Oct 14 13:15:08 crc kubenswrapper[4725]: I1014 13:15:08.109274 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" event={"ID":"f27c973f-d487-4b38-8921-f9c96635219e","Type":"ContainerDied","Data":"ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7"} Oct 14 13:15:08 crc kubenswrapper[4725]: I1014 13:15:08.112930 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Oct 14 13:15:08 crc kubenswrapper[4725]: I1014 13:15:08.118466 4725 scope.go:117] "RemoveContainer" containerID="ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987" Oct 14 13:15:08 crc kubenswrapper[4725]: E1014 13:15:08.118606 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Oct 14 13:15:08 crc kubenswrapper[4725]: I1014 13:15:08.119542 4725 generic.go:334] "Generic (PLEG): container finished" podID="38d54d71-93d1-4cde-940e-a371117f59bd" containerID="e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b" exitCode=0 Oct 14 13:15:08 crc kubenswrapper[4725]: I1014 13:15:08.119608 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" event={"ID":"38d54d71-93d1-4cde-940e-a371117f59bd","Type":"ContainerDied","Data":"e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b"} Oct 14 13:15:08 crc kubenswrapper[4725]: I1014 13:15:08.122521 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-8fjcf" event={"ID":"db996ea5-a3bf-4db3-a0df-fdd640228c83","Type":"ContainerStarted","Data":"ed7f527d27ab4dbd4060c4e39e9d1ff40fd162ef707dbafe3a648e2d6d1993d0"} Oct 14 13:15:08 crc kubenswrapper[4725]: I1014 13:15:08.123181 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:08Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:08 crc kubenswrapper[4725]: I1014 13:15:08.138024 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c335f15dced5e4240b10bf8cf93f05aaa7954860de085d7d25e465ac49fcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ada4aa976fb1be52ba1daea092bc4a477f4fa57a4175db09718886f670cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:08Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:08 crc kubenswrapper[4725]: I1014 13:15:08.152769 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbgwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4ed727c-f4d1-47cd-a218-e22803eb1750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e839b3fcf6bd42a4f3c8167cfc3a8ce3d333f519ef8fa12daa8bfb7f795e6bb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gn97f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbgwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:08Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:08 crc kubenswrapper[4725]: I1014 13:15:08.172393 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:08Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:08 crc kubenswrapper[4725]: I1014 13:15:08.206552 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f27c973f-d487-4b38-8921-f9c96635219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l7nwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:08Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:08 crc 
kubenswrapper[4725]: I1014 13:15:08.234260 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf13954e4938ed6d4493d155d0d8d3ddd585e161184d1869879f8464fb537bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:08Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:08 crc kubenswrapper[4725]: I1014 13:15:08.271878 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7400c43922418faea2f15eab69dfcdb6d01fd14117423a0ce3de487078acd746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cc50d2bb6d8b894c9524b2557eda22f857a08c286a90a95ab4be265b524020c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9hh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:08Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:08 crc kubenswrapper[4725]: I1014 13:15:08.318443 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d54d71-93d1-4cde-940e-a371117f59bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\
\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9v9qj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:08Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:08 crc kubenswrapper[4725]: I1014 13:15:08.352015 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8fjcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db996ea5-a3bf-4db3-a0df-fdd640228c83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8fjcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:08Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:08 crc kubenswrapper[4725]: I1014 13:15:08.392951 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:08Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:08 crc kubenswrapper[4725]: I1014 13:15:08.433524 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9mfx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"768af815-9351-4b60-a9de-9f188049acd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d9cb10af1479948d9b616c0c054bea341c96c4964809f1b208472617a376c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8zz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9mfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:08Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:08 crc kubenswrapper[4725]: I1014 13:15:08.473628 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:08Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:08 crc kubenswrapper[4725]: I1014 13:15:08.515474 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60823bb2-3f65-441b-968e-bee1ef699eaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369d927b570ab87076cac9e54716473a0856899d0f4697b4b30d07a5181c4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ddcef18a4ca3491877f3ed3e14e43e7a478afef84fe6d9614c5dfd52c25864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb26ea922d538bc6c5cff6f8d0474cefe723ab38afb6a734a1f412d7591355ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ba77f6cadef4a84c4cc13ac9e1a86d31f32655aa5ec68b1b948fed3cadf3e2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T13:14:59Z\\\",\\\"message\\\":\\\"W1014 13:14:49.096766 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1014 
13:14:49.097281 1 crypto.go:601] Generating new CA for check-endpoints-signer@1760447689 cert, and key in /tmp/serving-cert-1900757213/serving-signer.crt, /tmp/serving-cert-1900757213/serving-signer.key\\\\nI1014 13:14:49.460858 1 observer_polling.go:159] Starting file observer\\\\nW1014 13:14:49.464193 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1014 13:14:49.464428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 13:14:49.465360 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1900757213/tls.crt::/tmp/serving-cert-1900757213/tls.key\\\\\\\"\\\\nF1014 13:14:59.791051 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW1014 13:15:05.624087 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 13:15:05.624866 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 13:15:05.626194 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3329203933/tls.crt::/tmp/serving-cert-3329203933/tls.key\\\\\\\"\\\\nI1014 13:15:06.092172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 13:15:06.112645 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 13:15:06.112671 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 13:15:06.112690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 13:15:06.112696 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 13:15:06.126751 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 13:15:06.126778 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126783 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 13:15:06.126790 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 13:15:06.126793 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 13:15:06.126795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 
13:15:06.127007 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 13:15:06.128605 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360cb71d40b47c350378789affa9d40d1097c487f19759ceac7dd653211dc523\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:08Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:08 crc kubenswrapper[4725]: I1014 13:15:08.551037 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9mfx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"768af815-9351-4b60-a9de-9f188049acd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d9cb10af1479948d9b616c0c054bea341c96c4964809f1b208472617a376c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8zz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9mfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:08Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:08 crc kubenswrapper[4725]: I1014 13:15:08.595761 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7400c43922418faea2f15eab69dfcdb6d01fd14117423a0ce3de487078acd746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cc50d2bb6d8b894c9524b2557eda22f857a08c286a90a95ab4be265b524020c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9hh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:08Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:08 crc kubenswrapper[4725]: I1014 13:15:08.639141 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d54d71-93d1-4cde-940e-a371117f59bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9v9qj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:08Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:08 crc kubenswrapper[4725]: I1014 13:15:08.673506 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8fjcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db996ea5-a3bf-4db3-a0df-fdd640228c83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8fjcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:08Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:08 crc kubenswrapper[4725]: I1014 13:15:08.714073 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:08Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:08 crc kubenswrapper[4725]: I1014 13:15:08.758218 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60823bb2-3f65-441b-968e-bee1ef699eaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369d927b570ab87076cac9e54716473a0856899d0f4697b4b30d07a5181c4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ddcef18a4ca3491877f3ed3e14e43e7a478afef84fe6d9614c5dfd52c25864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb26ea922d538bc6c5cff6f8d0474cefe723ab38afb6a734a1f412d7591355ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW1014 13:15:05.624087 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 13:15:05.624866 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 13:15:05.626194 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3329203933/tls.crt::/tmp/serving-cert-3329203933/tls.key\\\\\\\"\\\\nI1014 13:15:06.092172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 13:15:06.112645 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 13:15:06.112671 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 13:15:06.112690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 13:15:06.112696 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 13:15:06.126751 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 13:15:06.126778 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126783 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 13:15:06.126790 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 13:15:06.126793 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 13:15:06.126795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 13:15:06.127007 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 13:15:06.128605 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360cb71d40b47c350378789affa9d40d1097c487f19759ceac7dd653211dc523\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:08Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:08 crc kubenswrapper[4725]: I1014 13:15:08.799168 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:08Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:08 crc kubenswrapper[4725]: I1014 13:15:08.831957 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:08Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:08 crc kubenswrapper[4725]: I1014 13:15:08.875707 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c335f15dced5e4240b10bf8cf93f05aaa7954860de085d7d25e465ac49fcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ada4aa976fb1be52ba1daea092bc4a477f4fa57a4175db09718886f670cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:08Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:08 crc kubenswrapper[4725]: I1014 13:15:08.922168 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbgwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4ed727c-f4d1-47cd-a218-e22803eb1750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e839b3fcf6bd42a4f3c8167cfc3a8ce3d333f519ef8fa12daa8bfb7f795e6bb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gn97f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbgwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:08Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:08 crc kubenswrapper[4725]: I1014 13:15:08.957330 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf13954e4938ed6d4493d155d0d8d3ddd585e161184d1869879f8464fb537bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:08Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:08 crc kubenswrapper[4725]: I1014 13:15:08.996810 4725 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:08Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:09 crc kubenswrapper[4725]: I1014 13:15:09.037365 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f27c973f-d487-4b38-8921-f9c96635219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\
\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l7nwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:09Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:09 crc kubenswrapper[4725]: I1014 13:15:09.128927 4725 generic.go:334] "Generic (PLEG): container finished" podID="f27c973f-d487-4b38-8921-f9c96635219e" containerID="1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045" exitCode=0 Oct 14 13:15:09 crc kubenswrapper[4725]: I1014 13:15:09.129010 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" event={"ID":"f27c973f-d487-4b38-8921-f9c96635219e","Type":"ContainerDied","Data":"1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045"} Oct 14 13:15:09 crc kubenswrapper[4725]: I1014 13:15:09.130992 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"fa0481199b2dfab1f5e16e57b9e99c70988b1fcdf15877173722a436a0c7cd93"} Oct 14 13:15:09 crc kubenswrapper[4725]: I1014 13:15:09.135716 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" event={"ID":"38d54d71-93d1-4cde-940e-a371117f59bd","Type":"ContainerStarted","Data":"91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109"} Oct 14 13:15:09 crc kubenswrapper[4725]: I1014 13:15:09.135761 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" event={"ID":"38d54d71-93d1-4cde-940e-a371117f59bd","Type":"ContainerStarted","Data":"50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916"} Oct 14 13:15:09 crc kubenswrapper[4725]: I1014 13:15:09.135775 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" event={"ID":"38d54d71-93d1-4cde-940e-a371117f59bd","Type":"ContainerStarted","Data":"76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11"} Oct 14 13:15:09 crc kubenswrapper[4725]: I1014 13:15:09.135787 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" event={"ID":"38d54d71-93d1-4cde-940e-a371117f59bd","Type":"ContainerStarted","Data":"fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d"} Oct 14 13:15:09 crc kubenswrapper[4725]: I1014 13:15:09.135797 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" event={"ID":"38d54d71-93d1-4cde-940e-a371117f59bd","Type":"ContainerStarted","Data":"bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e"} Oct 14 13:15:09 crc kubenswrapper[4725]: I1014 13:15:09.135805 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" event={"ID":"38d54d71-93d1-4cde-940e-a371117f59bd","Type":"ContainerStarted","Data":"06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08"} Oct 14 13:15:09 crc kubenswrapper[4725]: I1014 13:15:09.136746 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-8fjcf" 
event={"ID":"db996ea5-a3bf-4db3-a0df-fdd640228c83","Type":"ContainerStarted","Data":"1932e79f25b11a7dd11ee7d1b43139b7b370aac134f24741288e73006992f497"} Oct 14 13:15:09 crc kubenswrapper[4725]: I1014 13:15:09.145426 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60823bb2-3f65-441b-968e-bee1ef699eaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369d927b570ab87076cac9e54716473a0856899d0f4697b4b30d07a5181c4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ddcef18a4ca3491877f3ed3e14e43e7a478afef84fe6d9614c5dfd52c25864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb26ea922d538bc6c5cff6f8d0474cefe723ab38afb6a734a1f412d7591355ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6
355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW1014 13:15:05.624087 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 13:15:05.624866 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 13:15:05.626194 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3329203933/tls.crt::/tmp/serving-cert-3329203933/tls.key\\\\\\\"\\\\nI1014 13:15:06.092172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 13:15:06.112645 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 13:15:06.112671 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 13:15:06.112690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 13:15:06.112696 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 13:15:06.126751 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 13:15:06.126778 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126783 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 13:15:06.126790 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 13:15:06.126793 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 13:15:06.126795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 13:15:06.127007 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 13:15:06.128605 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360cb71d40b47c350378789affa9d40d1097c487f19759ceac7dd653211dc523\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:09Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:09 crc kubenswrapper[4725]: I1014 13:15:09.159157 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:09Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:09 crc kubenswrapper[4725]: I1014 13:15:09.174178 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:09Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:09 crc kubenswrapper[4725]: I1014 13:15:09.194405 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c335f15dced5e4240b10bf8cf93f05aaa7954860de085d7d25e465ac49fcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ada4aa976fb1be52ba1daea092bc4a477f4fa57a4175db09718886f670cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:09Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:09 crc kubenswrapper[4725]: I1014 13:15:09.238169 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbgwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4ed727c-f4d1-47cd-a218-e22803eb1750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e839b3fcf6bd42a4f3c8167cfc3a8ce3d333f519ef8fa12daa8bfb7f795e6bb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gn97f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbgwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:09Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:09 crc kubenswrapper[4725]: I1014 13:15:09.275893 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf13954e4938ed6d4493d155d0d8d3ddd585e161184d1869879f8464fb537bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:09Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:09 crc kubenswrapper[4725]: I1014 13:15:09.312508 4725 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:09Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:09 crc kubenswrapper[4725]: I1014 13:15:09.356283 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f27c973f-d487-4b38-8921-f9c96635219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l7nwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:09Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:09 crc kubenswrapper[4725]: I1014 13:15:09.395017 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:09Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:09 crc kubenswrapper[4725]: I1014 13:15:09.434426 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9mfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"768af815-9351-4b60-a9de-9f188049acd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d9cb10af1479948d9b616c0c054bea341c96c4964809f1b208472617a376c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8zz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9mfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:09Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:09 crc kubenswrapper[4725]: I1014 13:15:09.473059 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7400c43922418faea2f15eab69dfcdb6d01fd14117423a0ce3de487078acd746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cc50d2bb6d8b894c9524b2557eda22f857a08c286a90a95ab4be265b524020c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9hh9\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:09Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:09 crc kubenswrapper[4725]: I1014 13:15:09.520103 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d54d71-93d1-4cde-940e-a371117f59bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrid
es\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9v9qj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:09Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:09 crc kubenswrapper[4725]: I1014 13:15:09.555617 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8fjcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db996ea5-a3bf-4db3-a0df-fdd640228c83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8fjcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:09Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:09 crc kubenswrapper[4725]: I1014 13:15:09.594569 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:09Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:09 crc kubenswrapper[4725]: I1014 13:15:09.634076 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c335f15dced5e4240b10bf8cf93f05aaa7954860de085d7d25e465ac49fcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ada4aa976fb1be52ba1daea092bc4a477f4fa57a4175db09718886f670cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:09Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:09 crc kubenswrapper[4725]: I1014 13:15:09.676850 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbgwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4ed727c-f4d1-47cd-a218-e22803eb1750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e839b3fcf6bd42a4f3c8167cfc3a8ce3d333f519ef8fa12daa8bfb7f795e6bb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gn97f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbgwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:09Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:09 crc kubenswrapper[4725]: I1014 13:15:09.702352 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:15:09 crc kubenswrapper[4725]: I1014 13:15:09.702439 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:15:09 crc kubenswrapper[4725]: I1014 13:15:09.702491 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:15:09 crc kubenswrapper[4725]: I1014 13:15:09.702517 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:15:09 crc kubenswrapper[4725]: I1014 13:15:09.702544 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:15:09 crc kubenswrapper[4725]: E1014 13:15:09.702638 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-14 13:15:13.702610375 +0000 UTC m=+30.551045184 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:15:09 crc kubenswrapper[4725]: E1014 13:15:09.702640 4725 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 13:15:09 crc kubenswrapper[4725]: E1014 13:15:09.702711 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 13:15:13.702701457 +0000 UTC m=+30.551136266 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 13:15:09 crc kubenswrapper[4725]: E1014 13:15:09.702725 4725 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 13:15:09 crc kubenswrapper[4725]: E1014 13:15:09.702762 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 13:15:09 crc kubenswrapper[4725]: E1014 13:15:09.702787 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 13:15:09 crc kubenswrapper[4725]: E1014 13:15:09.702799 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 13:15:09 crc kubenswrapper[4725]: E1014 13:15:09.702801 4725 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 13:15:09 crc kubenswrapper[4725]: E1014 13:15:09.702812 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 13:15:09 crc kubenswrapper[4725]: E1014 13:15:09.702821 4725 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 13:15:09 crc kubenswrapper[4725]: E1014 13:15:09.702813 4725 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 13:15:13.70279166 +0000 UTC m=+30.551226539 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 13:15:09 crc kubenswrapper[4725]: E1014 13:15:09.702857 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-14 13:15:13.702845111 +0000 UTC m=+30.551279920 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 13:15:09 crc kubenswrapper[4725]: E1014 13:15:09.702868 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-14 13:15:13.702863292 +0000 UTC m=+30.551298101 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 13:15:09 crc kubenswrapper[4725]: I1014 13:15:09.729530 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf13954e4938ed6d4493d155d0d8d3ddd585e161184d1869879f8464fb537bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:09Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:09 crc kubenswrapper[4725]: I1014 13:15:09.783290 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0481199b2dfab1f5e16e57b9e99c70988b1fcdf15877173722a436a0c7cd93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:09Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:09 crc kubenswrapper[4725]: I1014 13:15:09.802062 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f27c973f-d487-4b38-8921-f9c96635219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-
14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l7nwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:09Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:09 crc kubenswrapper[4725]: I1014 13:15:09.831044 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9mfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"768af815-9351-4b60-a9de-9f188049acd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d9cb10af1479948d9b616c0c054bea341c96c4964809f1b208472617a376c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8zz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9mfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:09Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:09 crc kubenswrapper[4725]: I1014 13:15:09.874691 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7400c43922418faea2f15eab69dfcdb6d01fd14117423a0ce3de487078acd746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cc50d2bb6d8b894c9524b2557eda22f857a08c286a90a95ab4be265b524020c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9hh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:09Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:09 crc kubenswrapper[4725]: I1014 13:15:09.905136 4725 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 13:15:09 crc kubenswrapper[4725]: I1014 13:15:09.905919 4725 scope.go:117] "RemoveContainer" containerID="ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987" Oct 14 13:15:09 crc kubenswrapper[4725]: E1014 13:15:09.906299 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Oct 14 13:15:09 crc kubenswrapper[4725]: I1014 13:15:09.919411 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d54d71-93d1-4cde-940e-a371117f59bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9v9qj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:09Z 
is after 2025-08-24T17:21:41Z" Oct 14 13:15:09 crc kubenswrapper[4725]: I1014 13:15:09.920527 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:15:09 crc kubenswrapper[4725]: I1014 13:15:09.920562 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:15:09 crc kubenswrapper[4725]: E1014 13:15:09.920620 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 13:15:09 crc kubenswrapper[4725]: E1014 13:15:09.920682 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 13:15:09 crc kubenswrapper[4725]: I1014 13:15:09.920562 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:15:09 crc kubenswrapper[4725]: E1014 13:15:09.920778 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 13:15:09 crc kubenswrapper[4725]: I1014 13:15:09.951974 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8fjcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db996ea5-a3bf-4db3-a0df-fdd640228c83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1932e79f25b11a7dd11ee7d1b43139b7b370aac134f24741288e73006992f497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8fjcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:09Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:09 crc kubenswrapper[4725]: I1014 13:15:09.993541 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:09Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:10 crc kubenswrapper[4725]: I1014 13:15:10.034430 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60823bb2-3f65-441b-968e-bee1ef699eaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369d927b570ab87076cac9e54716473a0856899d0f4697b4b30d07a5181c4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ddcef18a4ca3491877f3ed3e14e43e7a478afef84fe6d9614c5dfd52c25864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb26ea922d538bc6c5cff6f8d0474cefe723ab38afb6a734a1f412d7591355ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW1014 13:15:05.624087 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 13:15:05.624866 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 13:15:05.626194 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3329203933/tls.crt::/tmp/serving-cert-3329203933/tls.key\\\\\\\"\\\\nI1014 13:15:06.092172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 13:15:06.112645 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 13:15:06.112671 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 13:15:06.112690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 13:15:06.112696 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 13:15:06.126751 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 13:15:06.126778 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126783 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 13:15:06.126790 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 13:15:06.126793 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 13:15:06.126795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 13:15:06.127007 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 13:15:06.128605 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360cb71d40b47c350378789affa9d40d1097c487f19759ceac7dd653211dc523\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:10Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:10 crc kubenswrapper[4725]: I1014 13:15:10.076881 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:10Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:10 crc kubenswrapper[4725]: I1014 13:15:10.142195 4725 generic.go:334] "Generic (PLEG): container finished" podID="f27c973f-d487-4b38-8921-f9c96635219e" containerID="3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f" exitCode=0 Oct 14 13:15:10 crc kubenswrapper[4725]: I1014 13:15:10.142282 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" event={"ID":"f27c973f-d487-4b38-8921-f9c96635219e","Type":"ContainerDied","Data":"3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f"} Oct 14 13:15:10 crc kubenswrapper[4725]: I1014 13:15:10.160763 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60823bb2-3f65-441b-968e-bee1ef699eaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369d927b570ab87076cac9e54716473a0856899d0f4697b4b30d07a5181c4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ddcef18a4ca3491877f3ed3e14e43e7a478afef84fe6d9614c5dfd52c25864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb26ea922d538bc6c5cff6f8d0474cefe723ab38afb6a734a1f412d7591355ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW1014 13:15:05.624087 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 13:15:05.624866 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 13:15:05.626194 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3329203933/tls.crt::/tmp/serving-cert-3329203933/tls.key\\\\\\\"\\\\nI1014 13:15:06.092172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 13:15:06.112645 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 13:15:06.112671 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 13:15:06.112690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 13:15:06.112696 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 13:15:06.126751 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 13:15:06.126778 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126783 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 13:15:06.126790 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 13:15:06.126793 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 13:15:06.126795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 13:15:06.127007 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 13:15:06.128605 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360cb71d40b47c350378789affa9d40d1097c487f19759ceac7dd653211dc523\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:10Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:10 crc kubenswrapper[4725]: I1014 13:15:10.176490 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:10Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:10 crc kubenswrapper[4725]: I1014 13:15:10.193959 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:10Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:10 crc kubenswrapper[4725]: I1014 13:15:10.234080 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c335f15dced5e4240b10bf8cf93f05aaa7954860de085d7d25e465ac49fcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ada4aa976fb1be52ba1daea092bc4a477f4fa57a4175db09718886f670cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:10Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:10 crc kubenswrapper[4725]: I1014 13:15:10.274966 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbgwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4ed727c-f4d1-47cd-a218-e22803eb1750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e839b3fcf6bd42a4f3c8167cfc3a8ce3d333f519ef8fa12daa8bfb7f795e6bb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gn97f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbgwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:10Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:10 crc kubenswrapper[4725]: I1014 13:15:10.315312 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf13954e4938ed6d4493d155d0d8d3ddd585e161184d1869879f8464fb537bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:10Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:10 crc kubenswrapper[4725]: I1014 13:15:10.351538 4725 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0481199b2dfab1f5e16e57b9e99c70988b1fcdf15877173722a436a0c7cd93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:10Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:10 crc kubenswrapper[4725]: I1014 13:15:10.395391 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f27c973f-d487-4b38-8921-f9c96635219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",
\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l7nwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:10Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:10 crc kubenswrapper[4725]: I1014 13:15:10.433679 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:10Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:10 crc kubenswrapper[4725]: I1014 13:15:10.472847 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9mfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"768af815-9351-4b60-a9de-9f188049acd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d9cb10af1479948d9b616c0c054bea341c96c4964809f1b208472617a376c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8zz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9mfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:10Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:10 crc kubenswrapper[4725]: I1014 13:15:10.514082 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7400c43922418faea2f15eab69dfcdb6d01fd14117423a0ce3de487078acd746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cc50d2bb6d8b894c9524b2557eda22f857a08c286a90a95ab4be265b524020c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9hh9\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:10Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:10 crc kubenswrapper[4725]: I1014 13:15:10.560925 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d54d71-93d1-4cde-940e-a371117f59bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrid
es\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9v9qj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:10Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:10 crc kubenswrapper[4725]: I1014 13:15:10.595433 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8fjcf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db996ea5-a3bf-4db3-a0df-fdd640228c83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1932e79f25b11a7dd11ee7d1b43139b7b370aac134f24741288e73006992f497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8fjcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:10Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.148098 4725 generic.go:334] "Generic (PLEG): container finished" podID="f27c973f-d487-4b38-8921-f9c96635219e" containerID="74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e" exitCode=0 Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.148163 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" event={"ID":"f27c973f-d487-4b38-8921-f9c96635219e","Type":"ContainerDied","Data":"74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e"} Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.165926 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf13954e4938ed6d4493d155d0d8d3ddd585e161184d1869879f8464fb537bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:11Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.180211 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0481199b2dfab1f5e16e57b9e99c70988b1fcdf15877173722a436a0c7cd93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:11Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.196993 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f27c973f-d487-4b38-8921-f9c96635219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l7nwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:11Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.212879 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:11Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.226583 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9mfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"768af815-9351-4b60-a9de-9f188049acd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d9cb10af1479948d9b616c0c054bea341c96c4964809f1b208472617a376c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8zz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9mfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:11Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.240169 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7400c43922418faea2f15eab69dfcdb6d01fd14117423a0ce3de487078acd746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cc50d2bb6d8b894c9524b2557eda22f857a08c286a90a95ab4be265b524020c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9hh9\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:11Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.259356 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d54d71-93d1-4cde-940e-a371117f59bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrid
es\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9v9qj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:11Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.270227 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8fjcf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db996ea5-a3bf-4db3-a0df-fdd640228c83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1932e79f25b11a7dd11ee7d1b43139b7b370aac134f24741288e73006992f497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8fjcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:11Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.284552 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60823bb2-3f65-441b-968e-bee1ef699eaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369d927b570ab87076cac9e54716473a0856899d0f4697b4b30d07a5181c4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ddcef18a4ca3491877f3ed3e14e43e7a478afef84fe6d9614c5dfd52c25864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb26ea922d538bc6c5cff6f8d0474cefe723ab38afb6a734a1f412d7591355ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW1014 13:15:05.624087 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 13:15:05.624866 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 13:15:05.626194 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3329203933/tls.crt::/tmp/serving-cert-3329203933/tls.key\\\\\\\"\\\\nI1014 13:15:06.092172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 13:15:06.112645 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 13:15:06.112671 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 13:15:06.112690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 13:15:06.112696 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 13:15:06.126751 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 13:15:06.126778 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126783 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 13:15:06.126790 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 13:15:06.126793 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 13:15:06.126795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 13:15:06.127007 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 13:15:06.128605 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360cb71d40b47c350378789affa9d40d1097c487f19759ceac7dd653211dc523\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:11Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.297362 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:11Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.303481 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.307199 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.311011 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:11Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.311141 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.328806 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c335f15dced5e4240b10bf8cf93f05aaa7954860de085d7d25e465ac49fcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ada4aa976fb1be52ba1daea092bc4a477f4fa57a4175db09718886f670cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"
}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:11Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.343920 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbgwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4ed727c-f4d1-47cd-a218-e22803eb1750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e839b3fcf6bd42a4f3c8167cfc3a8ce3d333f519ef8fa12daa8bfb7f795e6bb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\
\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gn97f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbgwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:11Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.356837 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7400c43922418faea2f15eab69dfcdb6d01fd14117423a0ce3de487078acd746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cc50d2bb6d8b894c9524b2557eda22f857a08c286a90a95ab4be265b524020c\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9hh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:11Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.374472 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d54d71-93d1-4cde-940e-a371117f59bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9v9qj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:11Z 
is after 2025-08-24T17:21:41Z" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.386105 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8fjcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db996ea5-a3bf-4db3-a0df-fdd640228c83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1932e79f25b11a7dd11ee7d1b43139b7b370aac134f24741288e73006992f497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8fjcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:11Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.397776 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:11Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.408075 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9mfx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"768af815-9351-4b60-a9de-9f188049acd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d9cb10af1479948d9b616c0c054bea341c96c4964809f1b208472617a376c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8zz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9mfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:11Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.420078 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:11Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.433515 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60823bb2-3f65-441b-968e-bee1ef699eaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369d927b570ab87076cac9e54716473a0856899d0f4697b4b30d07a5181c4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ddcef18a4ca3491877f3ed3e14e43e7a478afef84fe6d9614c5dfd52c25864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb26ea922d538bc6c5cff6f8d0474cefe723ab38afb6a734a1f412d7591355ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW1014 13:15:05.624087 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 13:15:05.624866 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 13:15:05.626194 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3329203933/tls.crt::/tmp/serving-cert-3329203933/tls.key\\\\\\\"\\\\nI1014 13:15:06.092172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 13:15:06.112645 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 13:15:06.112671 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 13:15:06.112690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 13:15:06.112696 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 13:15:06.126751 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 13:15:06.126778 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126783 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 13:15:06.126790 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 13:15:06.126793 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 13:15:06.126795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 13:15:06.127007 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 13:15:06.128605 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360cb71d40b47c350378789affa9d40d1097c487f19759ceac7dd653211dc523\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:11Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.453059 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:11Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.495091 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c335f15dced5e4240b10bf8cf93f05aaa7954860de085d7d25e465ac49fcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ada4aa976fb1be52ba1daea092bc4a477f4fa57a4175db09718886f670cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:11Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.534490 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbgwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4ed727c-f4d1-47cd-a218-e22803eb1750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e839b3fcf6bd42a4f3c8167cfc3a8ce3d333f519ef8fa12daa8bfb7f795e6bb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\"
,\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gn97f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbgwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:11Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.573654 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55e8261f-c97f-4d46-b31f-6ef737e6c4c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92d8ba988206535f4ad53245b92484c9cfaf86d2a768d171b6a56a3d71ce2855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c3fa61126ab1de3e82ff31cd2d54b72d6a4355db0dbe16a1300f5349ec032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76dc71de9a8769dc0878cc796ad53a4355d128d9d3011601d1e10127a390b87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968caf419d5ddbb9213c7325a25516bdec5108f010a0e3457faa716b6f6f8955\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:11Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.613624 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0481199b2dfab1f5e16e57b9e99c70988b1fcdf15877173722a436a0c7cd93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-10-14T13:15:11Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.637783 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.639493 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.639533 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.639543 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.639602 4725 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.658727 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f27c973f-d487-4b38-8921-f9c96635219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l7nwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:11Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.705122 4725 kubelet_node_status.go:115] "Node was previously registered" node="crc" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.705503 4725 kubelet_node_status.go:79] "Successfully registered node" node="crc" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.706583 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.706616 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.706629 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.706645 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.706657 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:11Z","lastTransitionTime":"2025-10-14T13:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:11 crc kubenswrapper[4725]: E1014 13:15:11.729398 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d8471f97-cd84-4e08-baca-4ac91f02188a\\\",\\\"systemUUID\\\":\\\"c40d671b-403d-4187-8320-a34d153a3ed0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:11Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.732955 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.732987 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.732998 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.733013 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.733026 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:11Z","lastTransitionTime":"2025-10-14T13:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.734274 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf13954e4938ed6d4493d155d0d8d3ddd585e161184d1869879f8464fb537bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:11Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:11 crc kubenswrapper[4725]: E1014 13:15:11.745618 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d8471f97-cd84-4e08-baca-4ac91f02188a\\\",\\\"systemUUID\\\":\\\"c40d671b-403d-4187-8320-a34d153a3ed0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:11Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.748769 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.748797 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.748806 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.748822 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.748832 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:11Z","lastTransitionTime":"2025-10-14T13:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:11 crc kubenswrapper[4725]: E1014 13:15:11.758883 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d8471f97-cd84-4e08-baca-4ac91f02188a\\\",\\\"systemUUID\\\":\\\"c40d671b-403d-4187-8320-a34d153a3ed0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:11Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.762638 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.762709 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.762724 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.762741 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.762754 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:11Z","lastTransitionTime":"2025-10-14T13:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:11 crc kubenswrapper[4725]: E1014 13:15:11.776355 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d8471f97-cd84-4e08-baca-4ac91f02188a\\\",\\\"systemUUID\\\":\\\"c40d671b-403d-4187-8320-a34d153a3ed0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:11Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.779679 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.779722 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.779734 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.779750 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.779761 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:11Z","lastTransitionTime":"2025-10-14T13:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:11 crc kubenswrapper[4725]: E1014 13:15:11.792579 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d8471f97-cd84-4e08-baca-4ac91f02188a\\\",\\\"systemUUID\\\":\\\"c40d671b-403d-4187-8320-a34d153a3ed0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:11Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:11 crc kubenswrapper[4725]: E1014 13:15:11.792707 4725 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.794687 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.794740 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.794762 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.794792 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.794813 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:11Z","lastTransitionTime":"2025-10-14T13:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.896942 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.896979 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.896987 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.897003 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.897012 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:11Z","lastTransitionTime":"2025-10-14T13:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.921276 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.921360 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:15:11 crc kubenswrapper[4725]: E1014 13:15:11.921401 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 13:15:11 crc kubenswrapper[4725]: E1014 13:15:11.921526 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.921294 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:15:11 crc kubenswrapper[4725]: E1014 13:15:11.921874 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.999072 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.999100 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.999108 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.999121 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:11 crc kubenswrapper[4725]: I1014 13:15:11.999130 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:11Z","lastTransitionTime":"2025-10-14T13:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.102005 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.102051 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.102064 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.102081 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.102096 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:12Z","lastTransitionTime":"2025-10-14T13:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.155496 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" event={"ID":"38d54d71-93d1-4cde-940e-a371117f59bd","Type":"ContainerStarted","Data":"7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b"} Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.158240 4725 generic.go:334] "Generic (PLEG): container finished" podID="f27c973f-d487-4b38-8921-f9c96635219e" containerID="b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433" exitCode=0 Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.158310 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" event={"ID":"f27c973f-d487-4b38-8921-f9c96635219e","Type":"ContainerDied","Data":"b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433"} Oct 14 13:15:12 crc kubenswrapper[4725]: E1014 13:15:12.164995 4725 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.173415 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf13954e4938ed6d4493d155d0d8d3ddd585e161184d1869879f8464fb537bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:12Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:12 crc 
kubenswrapper[4725]: I1014 13:15:12.187220 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0481199b2dfab1f5e16e57b9e99c70988b1fcdf15877173722a436a0c7cd93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:12Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.201000 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f27c973f-d487-4b38-8921-f9c96635219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69
053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l7nwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:12Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.204628 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.204669 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.204679 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.204694 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.204704 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:12Z","lastTransitionTime":"2025-10-14T13:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.218404 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:12Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.227511 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9mfx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"768af815-9351-4b60-a9de-9f188049acd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d9cb10af1479948d9b616c0c054bea341c96c4964809f1b208472617a376c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8zz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9mfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:12Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.238993 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7400c43922418faea2f15eab69dfcdb6d01fd14117423a0ce3de487078acd746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cc50d2bb6d8b894c9524b2557eda22f857a08c286a90a95ab4be265b524020c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9hh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:12Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.255867 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d54d71-93d1-4cde-940e-a371117f59bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9v9qj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:12Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.266914 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8fjcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db996ea5-a3bf-4db3-a0df-fdd640228c83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1932e79f25b11a7dd11ee7d1b43139b7b370aac134f24741288e73006992f497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8fjcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:12Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.278681 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60823bb2-3f65-441b-968e-bee1ef699eaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369d927b570ab87076cac9e54716473a0856899d0f4697b4b30d07a5181c4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ddcef18a4ca3491877f3ed3e14e43e7a478afef84fe6d9614c5dfd52c25864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\
"containerID\\\":\\\"cri-o://cb26ea922d538bc6c5cff6f8d0474cefe723ab38afb6a734a1f412d7591355ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW1014 13:15:05.624087 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 13:15:05.624866 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 13:15:05.626194 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3329203933/tls.crt::/tmp/serving-cert-3329203933/tls.key\\\\\\\"\\\\nI1014 13:15:06.092172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 13:15:06.112645 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 13:15:06.112671 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 13:15:06.112690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 13:15:06.112696 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 13:15:06.126751 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 13:15:06.126778 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126783 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 13:15:06.126790 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 13:15:06.126793 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 13:15:06.126795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 13:15:06.127007 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 13:15:06.128605 1 
cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360cb71d40b47c350378789affa9d40d1097c487f19759ceac7dd653211dc523\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:12Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.296202 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:12Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.306841 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.306934 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.306958 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.306985 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.307007 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:12Z","lastTransitionTime":"2025-10-14T13:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.313860 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55e8261f-c97f-4d46-b31f-6ef737e6c4c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92d8ba988206535f4ad53245b92484c9cfaf86d2a768d171b6a56a3d71ce2855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c3fa61126ab1de3e82ff31cd2d54b72d6a4355db0dbe16a1300f5349ec032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76dc71de9a8769dc0878cc796ad53a4355d128d9d3011601d1e10127a390b87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968caf419d5ddbb9213c7325a25516bdec5108f010a0e3457faa716b6f6f8955\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:12Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.327206 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:12Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.339638 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c335f15dced5e4240b10bf8cf93f05aaa7954860de085d7d25e465ac49fcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ada4aa976fb1be52ba1daea092bc4a477f4fa57a4175db09718886f670cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:12Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.354231 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbgwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4ed727c-f4d1-47cd-a218-e22803eb1750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e839b3fcf6bd42a4f3c8167cfc3a8ce3d333f519ef8fa12daa8bfb7f795e6bb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gn97f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbgwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:12Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.409481 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.409715 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.409782 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.409840 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.409902 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:12Z","lastTransitionTime":"2025-10-14T13:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.511877 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.511920 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.511929 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.511946 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.511958 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:12Z","lastTransitionTime":"2025-10-14T13:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.614554 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.614593 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.614605 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.614623 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.614635 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:12Z","lastTransitionTime":"2025-10-14T13:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.717265 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.717299 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.717307 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.717319 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.717329 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:12Z","lastTransitionTime":"2025-10-14T13:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.819979 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.820046 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.820060 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.820078 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.820088 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:12Z","lastTransitionTime":"2025-10-14T13:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.923200 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.923260 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.923278 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.923303 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:12 crc kubenswrapper[4725]: I1014 13:15:12.923319 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:12Z","lastTransitionTime":"2025-10-14T13:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.025994 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.026036 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.026046 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.026064 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.026076 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:13Z","lastTransitionTime":"2025-10-14T13:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.129070 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.129112 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.129126 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.129146 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.129159 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:13Z","lastTransitionTime":"2025-10-14T13:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
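The recurring "Node became not ready" condition is independent of the webhook failure: the kubelet reports NetworkReady=false because the container runtime finds no CNI configuration file in /etc/kubernetes/cni/net.d/, and it re-emits the condition on every sync until a network plugin writes one there. A rough, hypothetical sketch of that directory scan follows; the real check lives in CRI-O/libcni rather than in code like this, but the accepted extensions below match libcni's.

package main

import (
    "fmt"
    "os"
    "path/filepath"
    "strings"
)

// cniConfFiles lists candidate CNI config files in dir, the way libcni's
// ConfFiles does: regular files with a .conf, .conflist, or .json extension.
func cniConfFiles(dir string) ([]string, error) {
    entries, err := os.ReadDir(dir)
    if err != nil {
        return nil, err
    }
    var confs []string
    for _, e := range entries {
        if e.IsDir() {
            continue
        }
        switch strings.ToLower(filepath.Ext(e.Name())) {
        case ".conf", ".conflist", ".json":
            confs = append(confs, filepath.Join(dir, e.Name()))
        }
    }
    return confs, nil
}

func main() {
    confs, err := cniConfFiles("/etc/kubernetes/cni/net.d")
    if err != nil || len(confs) == 0 {
        // Mirrors the NetworkPluginNotReady reason in the Ready condition above.
        fmt.Println("NetworkReady=false: no CNI configuration file; has your network provider started?")
        return
    }
    fmt.Println("NetworkReady=true:", confs)
}

On this node the directory stays empty until the ovnkube-node pod (still PodInitializing in the statuses below) brings up ovnkube-controller, which is what eventually drops a config into it.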
Has your network provider started?"} Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.166185 4725 generic.go:334] "Generic (PLEG): container finished" podID="f27c973f-d487-4b38-8921-f9c96635219e" containerID="07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82" exitCode=0 Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.166302 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" event={"ID":"f27c973f-d487-4b38-8921-f9c96635219e","Type":"ContainerDied","Data":"07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82"} Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.184340 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:13Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.202905 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9mfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"768af815-9351-4b60-a9de-9f188049acd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d9cb10af1479948d9b616c0c054bea341c96c4964809f1b208472617a376c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8zz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9mfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:13Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.218777 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7400c43922418faea2f15eab69dfcdb6d01fd14117423a0ce3de487078acd746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cc50d2bb6d8b894c9524b2557eda22f857a08c286a90a95ab4be265b524020c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9hh9\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:13Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.233011 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.233058 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.233069 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.233085 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.233094 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:13Z","lastTransitionTime":"2025-10-14T13:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.241837 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d54d71-93d1-4cde-940e-a371117f59bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9v9qj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:13Z 
is after 2025-08-24T17:21:41Z" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.255881 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8fjcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db996ea5-a3bf-4db3-a0df-fdd640228c83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1932e79f25b11a7dd11ee7d1b43139b7b370aac134f24741288e73006992f497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8fjcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:13Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.278303 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60823bb2-3f65-441b-968e-bee1ef699eaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369d927b570ab87076cac9e54716473a0856899d0f4697b4b30d07a5181c4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ddcef18a4ca3491877f3ed3e14e43e7a478afef84fe6d9614c5dfd52c25864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb26ea922d538bc6c5cff6f8d0474cefe723ab38afb6a734a1f412d7591355ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW1014 13:15:05.624087 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 13:15:05.624866 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 13:15:05.626194 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3329203933/tls.crt::/tmp/serving-cert-3329203933/tls.key\\\\\\\"\\\\nI1014 13:15:06.092172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 13:15:06.112645 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 13:15:06.112671 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 13:15:06.112690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 13:15:06.112696 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 13:15:06.126751 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 13:15:06.126778 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126783 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 13:15:06.126790 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 13:15:06.126793 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 13:15:06.126795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 13:15:06.127007 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 13:15:06.128605 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360cb71d40b47c350378789affa9d40d1097c487f19759ceac7dd653211dc523\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:13Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.295281 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
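The kube-apiserver-check-endpoints container above last died with "pods \"kube-apiserver-crc\" not found" (exit code 255) and is now held in CrashLoopBackOff with a 10s back-off. Kubelet's documented restart back-off starts at 10 seconds and doubles after each failed restart, capped at 5 minutes; the sketch below just restates that schedule with illustrative constants and is not kubelet source.

package main

import (
    "fmt"
    "time"
)

func main() {
    // Doubling back-off: 10s, 20s, 40s, 1m20s, 2m40s, then pinned at 5m.
    delay := 10 * time.Second
    const maxDelay = 5 * time.Minute
    for attempt := 1; attempt <= 8; attempt++ {
        fmt.Printf("restart %d: wait %s\n", attempt, delay)
        delay *= 2
        if delay > maxDelay {
            delay = maxDelay
        }
    }
}

So if the static pod UID mismatch persists, the "back-off 10s" in this entry grows toward "back-off 5m0s" in later sync loops.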
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:13Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.314750 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55e8261f-c97f-4d46-b31f-6ef737e6c4c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92d8ba988206535f4ad53245b92484c9cfaf86d2a768d171b6a56a3d71ce2855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c3fa61126ab1de3e82ff31cd2d54b72d6a4355db0dbe16a1300f5349ec032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76dc71de9a8769dc0878cc796ad53a4355d128d9d3011601d1e10127a390b87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968caf419d5ddbb9213c7325a25516bdec5108f010a0e3457faa716b6f6f8955\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:13Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.329672 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:13Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.335869 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.335913 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.335926 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.335948 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.335963 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:13Z","lastTransitionTime":"2025-10-14T13:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
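Every "Failed to update status for pod" record above fails the same way: the status patch must pass the pod.network-node-identity.openshift.io admission webhook on https://127.0.0.1:9743, and the webhook's serving certificate expired at 2025-08-24T17:21:41Z, well before the current time 2025-10-14T13:15:13Z. A quick way to confirm what certificate the endpoint is actually presenting is to dial it and print the validity window. The Go sketch below (illustrative, not part of the cluster tooling) does that; it has to run on the node itself, since the webhook listens on loopback, and it skips chain verification so the handshake completes despite the expired cert.

    // certprobe.go - dial a TLS endpoint and report the validity window
    // of the certificates it presents. Address taken from the log above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "time"
    )

    func main() {
        addr := "127.0.0.1:9743" // webhook endpoint from the kubelet errors

        // InsecureSkipVerify lets the handshake succeed so we can inspect
        // the certificate that normal verification rejects as expired.
        conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            fmt.Println("dial failed:", err)
            return
        }
        defer conn.Close()

        now := time.Now()
        for i, cert := range conn.ConnectionState().PeerCertificates {
            fmt.Printf("cert[%d] subject=%s\n", i, cert.Subject)
            fmt.Printf("  notBefore=%s notAfter=%s\n",
                cert.NotBefore.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
            if now.After(cert.NotAfter) {
                fmt.Printf("  expired %s ago\n", now.Sub(cert.NotAfter).Round(time.Minute))
            }
        }
    }

A probe like this also distinguishes a genuinely expired certificate from node clock skew, since both produce the identical "certificate has expired or is not yet valid" error from the kubelet.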
Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.341932 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c335f15dced5e4240b10bf8cf93f05aaa7954860de085d7d25e465ac49fcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ada4aa976fb1be52ba1daea092bc4a477f4fa57a4175db09718886f670cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:13Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.354572 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbgwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4ed727c-f4d1-47cd-a218-e22803eb1750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e839b3fcf6bd42a4f3c8167cfc3a8ce3d333f519ef8fa12daa8bfb7f795e6bb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gn97f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbgwl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:13Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.369636 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf13954e4938ed6d4493d155d0d8d3ddd585e161184d1869879f8464fb537bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:13Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.381471 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0481199b2dfab1f5e16e57b9e99c70988b1fcdf15877173722a436a0c7cd93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:13Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.396320 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f27c973f-d487-4b38-8921-f9c96635219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l7nwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:13Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.438915 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.438951 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.438961 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.438978 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.438987 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:13Z","lastTransitionTime":"2025-10-14T13:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
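Interleaved with the webhook failures, the kubelet keeps republishing NodeNotReady with one root cause: no CNI configuration file in /etc/kubernetes/cni/net.d/. The readiness test behind that message amounts to scanning the conf directory for a network definition; the sketch below approximates it (the real path goes through libcni, which also parses and validates the files - this only checks for presence, with the directory taken from the log).

    // cnicheck.go - approximate the "network plugin not ready" test:
    // look for any .conf/.conflist/.json file in the CNI conf directory.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        confDir := "/etc/kubernetes/cni/net.d" // directory named in the log

        entries, err := os.ReadDir(confDir)
        if err != nil {
            fmt.Println("cannot read conf dir:", err)
            return
        }

        var found []string
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                found = append(found, e.Name())
            }
        }
        if len(found) == 0 {
            fmt.Println("no CNI configuration file in", confDir, "- network plugin not ready")
            return
        }
        fmt.Println("CNI configs present:", found)
    }

An empty directory is consistent with the pod states above: multus-additional-cni-plugins is still PodInitializing, so nothing has written a network configuration yet.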
Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.542312 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.542363 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.542379 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.542399 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.542411 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:13Z","lastTransitionTime":"2025-10-14T13:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.645720 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.645753 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.645763 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.645779 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.645793 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:13Z","lastTransitionTime":"2025-10-14T13:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.746276 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.746396 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:15:13 crc kubenswrapper[4725]: E1014 13:15:13.746484 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:15:21.746441201 +0000 UTC m=+38.594876020 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
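The nestedpendingoperations record above also shows how the volume manager throttles retries: every failure pushes the next attempt out by an exponentially growing delay, already at durationBeforeRetry 8s here. The sketch below mirrors that bookkeeping; the constants (500ms initial delay, doubling, capped around two minutes) match upstream kubelet defaults but should be read as illustrative, and the real implementation tracks this per operation inside nestedpendingoperations.

    // backoff.go - per-operation exponential backoff as in the volume
    // manager's "No retries permitted until ..." messages.
    package main

    import (
        "fmt"
        "time"
    )

    type expBackoff struct {
        lastError time.Time     // when the operation last failed
        delay     time.Duration // durationBeforeRetry in the log message
    }

    func (b *expBackoff) update(err error, now time.Time) {
        const (
            initial = 500 * time.Millisecond
            maxWait = 2*time.Minute + 2*time.Second
        )
        if err == nil {
            b.delay = 0 // success resets the backoff
            return
        }
        if b.delay == 0 {
            b.delay = initial
        } else if b.delay < maxWait {
            b.delay *= 2
            if b.delay > maxWait {
                b.delay = maxWait
            }
        }
        b.lastError = now
    }

    // safeToRetry is the inverse of "No retries permitted until <t>".
    func (b *expBackoff) safeToRetry(now time.Time) bool {
        return now.After(b.lastError.Add(b.delay))
    }

    func main() {
        var b expBackoff
        now := time.Now()
        for i := 0; i < 5; i++ { // five straight failures: 0.5s, 1s, 2s, 4s, 8s
            b.update(fmt.Errorf("mount failed"), now)
        }
        fmt.Println("delay after 5 failures:", b.delay) // 8s, matching the log
        fmt.Println("safe to retry now?", b.safeToRetry(now))
    }

Five consecutive failures starting from 500ms and doubling give exactly the 8s in these records; one more failure would push the window to 16s.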
Oct 14 13:15:13 crc kubenswrapper[4725]: E1014 13:15:13.746508 4725 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.746526 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:15:13 crc kubenswrapper[4725]: E1014 13:15:13.746547 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 13:15:21.746536844 +0000 UTC m=+38.594971653 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.746563 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.746593 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:15:13 crc kubenswrapper[4725]: E1014 13:15:13.746669 4725 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 13:15:13 crc kubenswrapper[4725]: E1014 13:15:13.746716 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 13:15:13 crc kubenswrapper[4725]: E1014 13:15:13.746733 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 13:15:13 crc 
kubenswrapper[4725]: E1014 13:15:13.747053 4725 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 13:15:13 crc kubenswrapper[4725]: E1014 13:15:13.747121 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-14 13:15:21.747092059 +0000 UTC m=+38.595526868 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 13:15:13 crc kubenswrapper[4725]: E1014 13:15:13.747139 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 13:15:21.747131141 +0000 UTC m=+38.595565950 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 13:15:13 crc kubenswrapper[4725]: E1014 13:15:13.747259 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.747879 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.747944 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.747957 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:13 crc kubenswrapper[4725]: E1014 13:15:13.747819 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 13:15:13 crc kubenswrapper[4725]: E1014 13:15:13.748008 4725 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 13:15:13 crc kubenswrapper[4725]: E1014 13:15:13.748062 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2025-10-14 13:15:21.748037474 +0000 UTC m=+38.596472293 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.747979 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.748098 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:13Z","lastTransitionTime":"2025-10-14T13:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.850605 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.850637 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.850645 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.850660 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.850670 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:13Z","lastTransitionTime":"2025-10-14T13:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.920538 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.920593 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.920568 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:15:13 crc kubenswrapper[4725]: E1014 13:15:13.920699 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 13:15:13 crc kubenswrapper[4725]: E1014 13:15:13.920813 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 13:15:13 crc kubenswrapper[4725]: E1014 13:15:13.920904 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.936815 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf13954e4938ed6d4493d155d0d8d3ddd585e161184d1869879f8464fb537bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:13Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.948753 4725 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0481199b2dfab1f5e16e57b9e99c70988b1fcdf15877173722a436a0c7cd93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:13Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.952560 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.952591 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.952604 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.952621 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.952633 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:13Z","lastTransitionTime":"2025-10-14T13:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
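The payloads of these rejected patches are ordinary strategic merge patches: pod conditions merge on their type key, and the $setElementOrder/conditions directive records the intended list order. For reference, the sketch below computes such a patch from an old and a new pod status with strategicpatch.CreateTwoWayMergePatch, the apimachinery helper that kubelet's status manager relies on; the condition values are invented for illustration.

    // statuspatch.go - compute a strategic merge patch between two pod
    // statuses. Requires k8s.io/api and k8s.io/apimachinery in go.mod.
    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/strategicpatch"
    )

    func marshalPod(conds []corev1.PodCondition) []byte {
        b, _ := json.Marshal(corev1.Pod{Status: corev1.PodStatus{Conditions: conds}})
        return b
    }

    func main() {
        oldPod := marshalPod([]corev1.PodCondition{
            {Type: corev1.PodReady, Status: corev1.ConditionTrue},
        })
        newPod := marshalPod([]corev1.PodCondition{
            {Type: corev1.PodReady, Status: corev1.ConditionFalse, Reason: "ContainersNotReady"},
        })

        // Conditions carry patchMergeKey "type", so only the changed
        // condition appears in the patch (plus ordering directives when
        // list order matters, as in the log above).
        patch, err := strategicpatch.CreateTwoWayMergePatch(oldPod, newPod, corev1.Pod{})
        if err != nil {
            panic(err)
        }
        fmt.Println(string(patch))
    }

The output has the same general shape as the patch bodies above; the patches themselves are well-formed and fail only because the admission webhook that must approve them cannot be reached over TLS.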
Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.964623 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f27c973f-d487-4b38-8921-f9c96635219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7c
f3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\
\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l7nwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:13Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.977125 4725 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:13Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:13 crc kubenswrapper[4725]: I1014 13:15:13.989274 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9mfx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"768af815-9351-4b60-a9de-9f188049acd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d9cb10af1479948d9b616c0c054bea341c96c4964809f1b208472617a376c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8zz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9mfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:13Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.003842 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7400c43922418faea2f15eab69dfcdb6d01fd14117423a0ce3de487078acd746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cc50d2bb6d8b894c9524b2557eda22f857a08c286a90a95ab4be265b524020c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9hh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:13Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.030326 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d54d71-93d1-4cde-940e-a371117f59bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9v9qj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.044880 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8fjcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db996ea5-a3bf-4db3-a0df-fdd640228c83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1932e79f25b11a7dd11ee7d1b43139b7b370aac134f24741288e73006992f497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8fjcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.055108 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.055306 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.055394 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.055492 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.055578 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:14Z","lastTransitionTime":"2025-10-14T13:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.060483 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60823bb2-3f65-441b-968e-bee1ef699eaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369d927b570ab87076cac9e54716473a0856899d0f4697b4b30d07a5181c4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ddcef18a4ca3491877f3ed3e14e43e7a478afef84fe6d9614c5dfd52c25864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb26ea922d538bc6c5cff6f8d0474cefe723ab38afb6a734a1f412d7591355ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW1014 13:15:05.624087 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 13:15:05.624866 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 13:15:05.626194 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3329203933/tls.crt::/tmp/serving-cert-3329203933/tls.key\\\\\\\"\\\\nI1014 13:15:06.092172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 13:15:06.112645 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 13:15:06.112671 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 13:15:06.112690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 13:15:06.112696 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 13:15:06.126751 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 13:15:06.126778 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126783 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 13:15:06.126790 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 13:15:06.126793 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 13:15:06.126795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 13:15:06.127007 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 13:15:06.128605 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360cb71d40b47c350378789affa9d40d1097c487f19759ceac7dd653211dc523\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.073591 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.088515 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55e8261f-c97f-4d46-b31f-6ef737e6c4c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92d8ba988206535f4ad53245b92484c9cfaf86d2a768d171b6a56a3d71ce2855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c3fa61126ab1de3e82ff31cd2d54b72d6a4355db0dbe16a1300f5349ec032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76dc71de9a8769dc0878cc796ad53a4355d128d9d3011601d1e10127a390b87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968caf419d5ddbb9213c7325a25516bdec5108f010a0e3457faa716b6f6f8955\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.102266 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.115627 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c335f15dced5e4240b10bf8cf93f05aaa7954860de085d7d25e465ac49fcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ada4aa976fb1be52ba1daea092bc4a477f4fa57a4175db09718886f670cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.128756 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbgwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4ed727c-f4d1-47cd-a218-e22803eb1750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e839b3fcf6bd42a4f3c8167cfc3a8ce3d333f519ef8fa12daa8bfb7f795e6bb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gn97f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbgwl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.157894 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.157933 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.157941 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.157956 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.157967 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:14Z","lastTransitionTime":"2025-10-14T13:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.172978 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" event={"ID":"38d54d71-93d1-4cde-940e-a371117f59bd","Type":"ContainerStarted","Data":"097fe661717d6f5253245276ca95056fa832cd15a4b2e7bde79d596d186e8e0a"} Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.173296 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.178818 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" event={"ID":"f27c973f-d487-4b38-8921-f9c96635219e","Type":"ContainerStarted","Data":"f246f81af5a032ce9db26f4f99c6cbcc43eb7ff2674a8de64d385efea9ae280d"} Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.189225 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c335f15dced5e4240b10bf8cf93f05aaa7954860de085d7d25e465ac49fcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ada4aa976fb1be52ba1daea092bc4a477f4fa57a4175db09718886f670cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.205369 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbgwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4ed727c-f4d1-47cd-a218-e22803eb1750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e839b3fcf6bd42a4f3c8167cfc3a8ce3d333f519ef8fa12daa8bfb7f795e6bb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gn97f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbgwl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.221857 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55e8261f-c97f-4d46-b31f-6ef737e6c4c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92d8ba988206535f4ad53245b92484c9cfaf86d2a768d171b6a56a3d71ce2855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c3fa61126ab1de3e82ff31cd2d54b72d6a4355db0dbe16a1300f5349ec032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76dc71de9a8769dc0878cc796ad53a4355d128d9d3011601d1e10127a390b87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968caf419d5ddbb9213c7325a25516bdec5108f010a0e3457faa716b6f6f8955\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.226350 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.234045 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.250018 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f27c973f-d487-4b38-8921-f9c96635219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l7nwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.262876 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.262925 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.262946 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.262970 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.262982 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:14Z","lastTransitionTime":"2025-10-14T13:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.268514 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf13954e4938ed6d4493d155d0d8d3ddd585e161184d1869879f8464fb537bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.282024 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
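Note: the NodeNotReady condition recorded twice above is a separate symptom from the webhook failure. The runtime reports NetworkReady=false until a CNI network configuration exists in /etc/kubernetes/cni/net.d/, and on this OVN-Kubernetes node that file is expected to appear once the ovnkube-controller container (whose statuses are quoted just below) finishes starting up. The following sketch is a simplified stand-in for that readiness test, not the actual kubelet/CRI-O code path: it only checks whether any *.conf, *.conflist, or *.json file is present in the directory named by the message:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // directory from the log message
	var found []string
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, _ := filepath.Glob(filepath.Join(dir, pat))
		found = append(found, matches...)
	}
	if len(found) == 0 {
		fmt.Println("no CNI configuration file in", dir, "- node stays NotReady")
		os.Exit(1)
	}
	fmt.Println("CNI config present:", found)
}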
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0481199b2dfab1f5e16e57b9e99c70988b1fcdf15877173722a436a0c7cd93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.298548 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d54d71-93d1-4cde-940e-a371117f59bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://097fe661717d6f5253245276ca95056fa832cd15
a4b2e7bde79d596d186e8e0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9v9qj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.307730 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8fjcf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db996ea5-a3bf-4db3-a0df-fdd640228c83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1932e79f25b11a7dd11ee7d1b43139b7b370aac134f24741288e73006992f497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8fjcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.321003 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.332653 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9mfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"768af815-9351-4b60-a9de-9f188049acd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d9cb10af1479948d9b616c0c054bea341c96c4964809f1b208472617a376c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8zz\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9mfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.343288 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7400c43922418faea2f15eab69dfcdb6d01fd14117423a0ce3de487078acd746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cc50d2bb6d8b894c9524b2557eda22f857a08c286a90a95ab4be265b524020c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9hh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.357701 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60823bb2-3f65-441b-968e-bee1ef699eaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369d927b570ab87076cac9e54716473a0856899d0f4697b4b30d07a5181c4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ddcef18a4ca3491877f3ed3e14e43e7a478afef84fe6d9614c5dfd52c25864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb26ea922d538bc6c5cff6f8d0474cefe723ab38afb6a734a1f412d7591355ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW1014 13:15:05.624087 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 13:15:05.624866 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 13:15:05.626194 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3329203933/tls.crt::/tmp/serving-cert-3329203933/tls.key\\\\\\\"\\\\nI1014 13:15:06.092172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 13:15:06.112645 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 13:15:06.112671 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 13:15:06.112690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 13:15:06.112696 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 13:15:06.126751 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 13:15:06.126778 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126783 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 13:15:06.126790 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 13:15:06.126793 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 13:15:06.126795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 13:15:06.127007 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 13:15:06.128605 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360cb71d40b47c350378789affa9d40d1097c487f19759ceac7dd653211dc523\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.366002 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.366046 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.366057 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.366076 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.366086 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:14Z","lastTransitionTime":"2025-10-14T13:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.370236 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.384312 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60823bb2-3f65-441b-968e-bee1ef699eaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369d927b570ab87076cac9e54716473a0856899d0f4697b4b30d07a5181c4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ddcef18a4ca3491877f3ed3e14e43e7a478afef84fe6d9614c5dfd52c25864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb26ea922d538bc6c5cff6f8d0474cefe723ab38afb6a734a1f412d7591355ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW1014 13:15:05.624087 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 13:15:05.624866 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 13:15:05.626194 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3329203933/tls.crt::/tmp/serving-cert-3329203933/tls.key\\\\\\\"\\\\nI1014 13:15:06.092172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 13:15:06.112645 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 13:15:06.112671 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 13:15:06.112690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 13:15:06.112696 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 13:15:06.126751 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 13:15:06.126778 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126783 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 13:15:06.126790 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 13:15:06.126793 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 13:15:06.126795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 13:15:06.127007 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 13:15:06.128605 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360cb71d40b47c350378789affa9d40d1097c487f19759ceac7dd653211dc523\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.397582 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.411114 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c335f15dced5e4240b10bf8cf93f05aaa7954860de085d7d25e465ac49fcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ada4aa976fb1be52ba1daea092bc4a477f4fa57a4175db09718886f670cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.423609 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbgwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4ed727c-f4d1-47cd-a218-e22803eb1750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e839b3fcf6bd42a4f3c8167cfc3a8ce3d333f519ef8fa12daa8bfb7f795e6bb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\"
:\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gn97f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbgwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.435895 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55e8261f-c97f-4d46-b31f-6ef737e6c4c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92d8ba988206535f4ad53245b92484c9cfaf86d2a768d171b6a56a3d71ce2855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c3fa61126ab1de3e82ff31cd2d54b72d6a4355db0dbe16a1300f5349ec032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76dc71de9a8769dc0878cc796ad53a4355d128d9d3011601d1e10127a390b87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968caf419d5ddbb9213c7325a25516bdec5108f010a0e3457faa716b6f6f8955\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.449844 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.466092 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f27c973f-d487-4b38-8921-f9c96635219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f246f81af5a032ce9db26f4f99c6cbcc43eb7ff2674a8de64d385efea9ae280d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l7nwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.467881 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.467910 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.467919 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.467932 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.467942 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:14Z","lastTransitionTime":"2025-10-14T13:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.480806 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf13954e4938ed6d4493d155d0d8d3ddd585e161184d1869879f8464fb537bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.494386 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0481199b2dfab1f5e16e57b9e99c70988b1fcdf15877173722a436a0c7cd93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.515039 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d54d71-93d1-4cde-940e-a371117f59bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://097fe661717d6f5253245276ca95056fa832cd15
a4b2e7bde79d596d186e8e0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9v9qj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.526560 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8fjcf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db996ea5-a3bf-4db3-a0df-fdd640228c83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1932e79f25b11a7dd11ee7d1b43139b7b370aac134f24741288e73006992f497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8fjcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.539616 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.550306 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9mfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"768af815-9351-4b60-a9de-9f188049acd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d9cb10af1479948d9b616c0c054bea341c96c4964809f1b208472617a376c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8zz\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9mfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.563335 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7400c43922418faea2f15eab69dfcdb6d01fd14117423a0ce3de487078acd746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cc50d2bb6d8b894c9524b2557eda22f857a08c286a90a95ab4be265b524020c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9hh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.570059 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.570089 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.570099 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.570114 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.570123 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:14Z","lastTransitionTime":"2025-10-14T13:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.672615 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.672682 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.672693 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.672709 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.672720 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:14Z","lastTransitionTime":"2025-10-14T13:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.775082 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.775130 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.775139 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.775152 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.775162 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:14Z","lastTransitionTime":"2025-10-14T13:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.877471 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.877512 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.877524 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.877540 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.877552 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:14Z","lastTransitionTime":"2025-10-14T13:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.979794 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.979827 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.979839 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.979854 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:14 crc kubenswrapper[4725]: I1014 13:15:14.979865 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:14Z","lastTransitionTime":"2025-10-14T13:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.082586 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.082621 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.082629 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.082646 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.082655 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:15Z","lastTransitionTime":"2025-10-14T13:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.181845 4725 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.182939 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.184412 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.184472 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.184481 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.184494 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.184504 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:15Z","lastTransitionTime":"2025-10-14T13:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.216390 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.229750 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf13954e4938ed6d4493d155d0d8d3ddd585e161184d1869879f8464fb537bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:15Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.243880 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0481199b2dfab1f5e16e57b9e99c70988b1fcdf15877173722a436a0c7cd93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:15Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.263350 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f27c973f-d487-4b38-8921-f9c96635219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f246f81af5a032ce9db26f4f99c6cbcc43eb7ff2674a8de64d385efea9ae280d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l7nwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:15Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.276088 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:15Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.287567 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.287607 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.287617 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.287632 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.287643 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:15Z","lastTransitionTime":"2025-10-14T13:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.290156 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9mfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"768af815-9351-4b60-a9de-9f188049acd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d9cb10af1479948d9b616c0c054bea341c96c4964809f1b208472617a376c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8zz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9mfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:15Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.305346 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7400c43922418faea2f15eab69dfcdb6d01fd14117423a0ce3de487078acd746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cc50d2bb6d8b894c9524b2557eda22f857a08c286a90a95ab4be265b524020c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9hh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:15Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.337801 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d54d71-93d1-4cde-940e-a371117f59bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://097fe661717d6f5253245276ca95056fa832cd15a4b2e7bde79d596d186e8e0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9v9qj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:15Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.350716 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8fjcf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db996ea5-a3bf-4db3-a0df-fdd640228c83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1932e79f25b11a7dd11ee7d1b43139b7b370aac134f24741288e73006992f497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8fjcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:15Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.372063 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60823bb2-3f65-441b-968e-bee1ef699eaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369d927b570ab87076cac9e54716473a0856899d0f4697b4b30d07a5181c4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ddcef18a4ca3491877f3ed3e14e43e7a478afef84fe6d9614c5dfd52c25864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb26ea922d538bc6c5cff6f8d0474cefe723ab38afb6a734a1f412d7591355ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW1014 13:15:05.624087 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 13:15:05.624866 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 13:15:05.626194 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3329203933/tls.crt::/tmp/serving-cert-3329203933/tls.key\\\\\\\"\\\\nI1014 13:15:06.092172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 13:15:06.112645 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 13:15:06.112671 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 13:15:06.112690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 13:15:06.112696 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 13:15:06.126751 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 13:15:06.126778 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126783 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 13:15:06.126790 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 13:15:06.126793 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 13:15:06.126795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 13:15:06.127007 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 13:15:06.128605 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360cb71d40b47c350378789affa9d40d1097c487f19759ceac7dd653211dc523\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:15Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.385291 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:15Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.389927 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.389976 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.389994 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.390020 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.390038 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:15Z","lastTransitionTime":"2025-10-14T13:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.398717 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55e8261f-c97f-4d46-b31f-6ef737e6c4c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92d8ba988206535f4ad53245b92484c9cfaf86d2a768d171b6a56a3d71ce2855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c3fa61126ab1de3e82ff31cd2d54b72d6a4355db0dbe16a1300f5349ec032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76dc71de9a8769dc0878cc796ad53a4355d128d9d3011601d1e10127a390b87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968caf419d5ddbb9213c7325a25516bdec5108f010a0e3457faa716b6f6f8955\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:15Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.414409 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:15Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.438298 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c335f15dced5e4240b10bf8cf93f05aaa7954860de085d7d25e465ac49fcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ada4aa976fb1be52ba1daea092bc4a477f4fa57a4175db09718886f670cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:15Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.459904 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbgwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4ed727c-f4d1-47cd-a218-e22803eb1750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e839b3fcf6bd42a4f3c8167cfc3a8ce3d333f519ef8fa12daa8bfb7f795e6bb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gn97f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbgwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:15Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.492029 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.492065 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.492074 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.492088 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.492097 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:15Z","lastTransitionTime":"2025-10-14T13:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.594991 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.595046 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.595065 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.595102 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.595126 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:15Z","lastTransitionTime":"2025-10-14T13:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.698326 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.698379 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.698396 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.698421 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.698438 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:15Z","lastTransitionTime":"2025-10-14T13:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.801813 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.801878 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.801897 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.801923 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.801940 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:15Z","lastTransitionTime":"2025-10-14T13:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.904343 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.904403 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.904420 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.904445 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.904495 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:15Z","lastTransitionTime":"2025-10-14T13:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.920673 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.920725 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:15:15 crc kubenswrapper[4725]: I1014 13:15:15.920753 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:15:15 crc kubenswrapper[4725]: E1014 13:15:15.920831 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 13:15:15 crc kubenswrapper[4725]: E1014 13:15:15.920965 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 13:15:15 crc kubenswrapper[4725]: E1014 13:15:15.921063 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.007703 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.007749 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.007760 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.007777 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.007791 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:16Z","lastTransitionTime":"2025-10-14T13:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.111302 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.111365 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.111378 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.111396 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.111409 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:16Z","lastTransitionTime":"2025-10-14T13:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.185566 4725 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.213540 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.213578 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.213590 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.213609 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.213621 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:16Z","lastTransitionTime":"2025-10-14T13:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.315291 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.315341 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.315353 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.315370 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.315381 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:16Z","lastTransitionTime":"2025-10-14T13:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.417732 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.417784 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.417795 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.417813 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.417823 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:16Z","lastTransitionTime":"2025-10-14T13:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.521304 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.521376 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.521388 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.521411 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.521423 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:16Z","lastTransitionTime":"2025-10-14T13:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.627543 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.627591 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.627603 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.627619 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.627630 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:16Z","lastTransitionTime":"2025-10-14T13:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.729836 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.729868 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.729876 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.729891 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.729900 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:16Z","lastTransitionTime":"2025-10-14T13:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.832262 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.832300 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.832311 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.832329 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.832342 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:16Z","lastTransitionTime":"2025-10-14T13:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.934892 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.934946 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.934959 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.934977 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:16 crc kubenswrapper[4725]: I1014 13:15:16.934992 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:16Z","lastTransitionTime":"2025-10-14T13:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.038169 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.038246 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.038271 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.038303 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.038324 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:17Z","lastTransitionTime":"2025-10-14T13:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.140948 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.141003 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.141014 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.141036 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.141052 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:17Z","lastTransitionTime":"2025-10-14T13:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.199407 4725 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.243954 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.243996 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.244012 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.244032 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.244049 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:17Z","lastTransitionTime":"2025-10-14T13:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.347184 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.347231 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.347243 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.347261 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.347275 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:17Z","lastTransitionTime":"2025-10-14T13:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.450409 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.450482 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.450507 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.450529 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.450543 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:17Z","lastTransitionTime":"2025-10-14T13:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.553096 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.553138 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.553152 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.553168 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.553177 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:17Z","lastTransitionTime":"2025-10-14T13:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.657283 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.657334 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.657342 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.657361 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.657372 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:17Z","lastTransitionTime":"2025-10-14T13:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.759758 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.759810 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.759830 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.759851 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.759867 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:17Z","lastTransitionTime":"2025-10-14T13:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.862711 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.862752 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.862764 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.862780 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.862793 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:17Z","lastTransitionTime":"2025-10-14T13:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.920961 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.920982 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:15:17 crc kubenswrapper[4725]: E1014 13:15:17.921129 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 13:15:17 crc kubenswrapper[4725]: E1014 13:15:17.921219 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.921498 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:15:17 crc kubenswrapper[4725]: E1014 13:15:17.921588 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.967065 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.967115 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.967124 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.967138 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:17 crc kubenswrapper[4725]: I1014 13:15:17.967147 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:17Z","lastTransitionTime":"2025-10-14T13:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.069098 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.069138 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.069148 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.069163 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.069174 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:18Z","lastTransitionTime":"2025-10-14T13:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.171545 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.171588 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.171599 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.171621 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.171634 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:18Z","lastTransitionTime":"2025-10-14T13:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.199009 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9v9qj_38d54d71-93d1-4cde-940e-a371117f59bd/ovnkube-controller/0.log" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.202154 4725 generic.go:334] "Generic (PLEG): container finished" podID="38d54d71-93d1-4cde-940e-a371117f59bd" containerID="097fe661717d6f5253245276ca95056fa832cd15a4b2e7bde79d596d186e8e0a" exitCode=1 Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.202192 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" event={"ID":"38d54d71-93d1-4cde-940e-a371117f59bd","Type":"ContainerDied","Data":"097fe661717d6f5253245276ca95056fa832cd15a4b2e7bde79d596d186e8e0a"} Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.202768 4725 scope.go:117] "RemoveContainer" containerID="097fe661717d6f5253245276ca95056fa832cd15a4b2e7bde79d596d186e8e0a" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.215430 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:18Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.234860 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60823bb2-3f65-441b-968e-bee1ef699eaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369d927b570ab87076cac9e54716473a0856899d0f4697b4b30d07a5181c4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ddcef18a4ca3491877f3ed3e14e43e7a478afef84fe6d9614c5dfd52c25864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb26ea922d538bc6c5cff6f8d0474cefe723ab38afb6a734a1f412d7591355ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW1014 13:15:05.624087 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 13:15:05.624866 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 13:15:05.626194 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3329203933/tls.crt::/tmp/serving-cert-3329203933/tls.key\\\\\\\"\\\\nI1014 13:15:06.092172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 13:15:06.112645 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 13:15:06.112671 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 13:15:06.112690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 13:15:06.112696 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 13:15:06.126751 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 13:15:06.126778 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126783 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 13:15:06.126790 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 13:15:06.126793 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 13:15:06.126795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 13:15:06.127007 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 13:15:06.128605 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360cb71d40b47c350378789affa9d40d1097c487f19759ceac7dd653211dc523\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:18Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.247541 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:18Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.267886 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c335f15dced5e4240b10bf8cf93f05aaa7954860de085d7d25e465ac49fcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ada4aa976fb1be52ba1daea092bc4a477f4fa57a4175db09718886f670cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:18Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.273939 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.273968 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.273981 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.273996 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.274005 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:18Z","lastTransitionTime":"2025-10-14T13:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.285425 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbgwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4ed727c-f4d1-47cd-a218-e22803eb1750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e839b3fcf6bd42a4f3c8167cfc3a8ce3d333f519ef8fa12daa8bfb7f795e6bb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gn97f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbgwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:18Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.304311 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55e8261f-c97f-4d46-b31f-6ef737e6c4c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92d8ba988206535f4ad53245b92484c9cfaf86d2a768d171b6a56a3d71ce2855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c3fa61126ab1de3e82ff31cd2d54b72d6a4355db0dbe16a1300f5349ec032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76dc71de9a8769dc0878cc796ad53a4355d128d9d3011601d1e10127a390b87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-oper
ator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968caf419d5ddbb9213c7325a25516bdec5108f010a0e3457faa716b6f6f8955\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:18Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.320997 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0481199b2dfab1f5e16e57b9e99c70988b1fcdf15877173722a436a0c7cd93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:18Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.359481 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f27c973f-d487-4b38-8921-f9c96635219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f246f81af5a032ce9db26f4f99c6cbcc43eb7ff2674a8de64d385efea9ae280d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l7nwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:18Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.373133 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf13954e4938ed6d4493d155d0d8d3ddd585e161184d1869879f8464fb537bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:18Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.375393 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.375538 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.375619 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.375694 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.375755 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:18Z","lastTransitionTime":"2025-10-14T13:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.385432 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7400c43922418faea2f15eab69dfcdb6d01fd14117423a0ce3de487078acd746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cc50d2bb6d8b894c9524b2557eda22f857a08c286a90a95ab4be265b524020c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9hh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:18Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.405409 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d54d71-93d1-4cde-940e-a371117f59bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://097fe661717d6f5253245276ca95056fa832cd15a4b2e7bde79d596d186e8e0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://097fe661717d6f5253245276ca95056fa832cd15a4b2e7bde79d596d186e8e0a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T13:15:17Z\\\",\\\"message\\\":\\\" 6066 factory.go:656] Stopping watch factory\\\\nI1014 13:15:16.786136 6066 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1014 13:15:16.786204 6066 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1014 13:15:16.786182 6066 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1014 13:15:16.786219 6066 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1014 13:15:16.786255 6066 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1014 13:15:16.786209 6066 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1014 13:15:16.786394 6066 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1014 13:15:16.786735 6066 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c
300ba646c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9v9qj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:18Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.417511 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8fjcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db996ea5-a3bf-4db3-a0df-fdd640228c83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1932e79f25b11a7dd11ee7d1b43139b7b370aac134f24741288e73006992f497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d
nnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8fjcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:18Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.430859 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:18Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.449948 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9mfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"768af815-9351-4b60-a9de-9f188049acd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d9cb10af1479948d9b616c0c054bea341c96c4964809f1b208472617a376c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8zz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9mfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:18Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.480990 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.481200 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.481391 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.481632 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.481709 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:18Z","lastTransitionTime":"2025-10-14T13:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.584751 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.584798 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.584810 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.584869 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.584887 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:18Z","lastTransitionTime":"2025-10-14T13:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.687942 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.688308 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.688321 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.688338 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.688350 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:18Z","lastTransitionTime":"2025-10-14T13:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.791119 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.791168 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.791183 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.791199 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.791211 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:18Z","lastTransitionTime":"2025-10-14T13:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.893866 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.893909 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.893924 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.893941 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.893954 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:18Z","lastTransitionTime":"2025-10-14T13:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.932237 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr"] Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.933323 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.936077 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.936155 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.955066 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c335f15dced5e4240b10bf8cf93f05aaa7954860de085d7d25e465ac49fcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ada4aa976fb1be52ba1daea092bc4a477f4fa57a4175db09718886f670cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\"
:\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:18Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.983314 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbgwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4ed727c-f4d1-47cd-a218-e22803eb1750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e839b3fcf6bd42a4f3c8167cfc3a8ce3d333f519ef8fa12daa8bfb7f795e6bb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gn97f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbgwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:18Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.997229 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.997299 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.997317 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.997340 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:18 crc kubenswrapper[4725]: I1014 13:15:18.997356 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:18Z","lastTransitionTime":"2025-10-14T13:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.000809 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmpdj\" (UniqueName: \"kubernetes.io/projected/ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d-kube-api-access-rmpdj\") pod \"ovnkube-control-plane-749d76644c-jbldr\" (UID: \"ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.000898 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d-env-overrides\") pod \"ovnkube-control-plane-749d76644c-jbldr\" (UID: \"ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.000925 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-jbldr\" (UID: \"ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.000949 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-jbldr\" (UID: \"ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.004718 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55e8261f-c97f-4d46-b31f-6ef737e6c4c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92d8ba988206535f4ad53245b92484c9cfaf86d2a768d171b6a56a3d71ce2855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c3fa61126ab1de3e82ff31cd2d54b72d6a4355db0dbe16a1300f5349ec032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76dc71de9a8769dc0878cc796ad53a4355d128d9d3011601d1e10127a390b87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968caf419d5ddbb9213c7325a25516bdec5108f010a0e3457faa716b6f6f8955\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:19Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.020406 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:19Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.080132 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f27c973f-d487-4b38-8921-f9c96635219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f246f81af5a032ce9db26f4f99c6cbcc43eb7ff2674a8de64d385efea9ae280d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l7nwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:19Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.100700 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.100746 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.100763 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.100788 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.100805 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:19Z","lastTransitionTime":"2025-10-14T13:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.111684 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmpdj\" (UniqueName: \"kubernetes.io/projected/ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d-kube-api-access-rmpdj\") pod \"ovnkube-control-plane-749d76644c-jbldr\" (UID: \"ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.112297 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d-env-overrides\") pod \"ovnkube-control-plane-749d76644c-jbldr\" (UID: \"ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.112325 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-jbldr\" (UID: \"ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.112342 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-jbldr\" (UID: \"ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.113275 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d-env-overrides\") pod \"ovnkube-control-plane-749d76644c-jbldr\" (UID: \"ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.113405 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-jbldr\" (UID: \"ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.118364 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf13954e4938ed6d4493d155d0d8d3ddd585e161184d1869879f8464fb537bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:19Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.119004 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-jbldr\" (UID: \"ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.132610 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmpdj\" (UniqueName: \"kubernetes.io/projected/ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d-kube-api-access-rmpdj\") pod \"ovnkube-control-plane-749d76644c-jbldr\" (UID: \"ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.137235 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0481199b2dfab1f5e16e57b9e99c70988b1fcdf15877173722a436a0c7cd93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:19Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.158167 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d54d71-93d1-4cde-940e-a371117f59bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://097fe661717d6f5253245276ca95056fa832cd15
a4b2e7bde79d596d186e8e0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://097fe661717d6f5253245276ca95056fa832cd15a4b2e7bde79d596d186e8e0a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T13:15:17Z\\\",\\\"message\\\":\\\" 6066 factory.go:656] Stopping watch factory\\\\nI1014 13:15:16.786136 6066 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1014 13:15:16.786204 6066 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1014 13:15:16.786182 6066 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1014 13:15:16.786219 6066 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1014 13:15:16.786255 6066 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1014 13:15:16.786209 6066 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1014 13:15:16.786394 6066 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1014 13:15:16.786735 6066 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c
300ba646c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9v9qj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:19Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.169215 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8fjcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db996ea5-a3bf-4db3-a0df-fdd640228c83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1932e79f25b11a7dd11ee7d1b43139b7b370aac134f24741288e73006992f497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d
nnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8fjcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:19Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.181699 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmpdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmpdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jbldr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:19Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.194140 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:19Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.206689 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.206764 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.206782 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.206818 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.206832 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:19Z","lastTransitionTime":"2025-10-14T13:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.206916 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9mfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"768af815-9351-4b60-a9de-9f188049acd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d9cb10af1479948d9b616c0c054bea341c96c4964809f1b208472617a376c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8zz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9mfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:19Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.212834 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9v9qj_38d54d71-93d1-4cde-940e-a371117f59bd/ovnkube-controller/0.log" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.214919 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" event={"ID":"38d54d71-93d1-4cde-940e-a371117f59bd","Type":"ContainerStarted","Data":"5e76d57a4596bc632c0173008e0e0f6c349b127420099b35bb8ded5f17d695bb"} Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.222074 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7400c43922418faea2f15eab69dfcdb6d01fd14117423a0ce3de487078acd746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cc50d2bb6d8b894c9524b2557eda22f857a08c286a90a95ab4be265b524020c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9hh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:19Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.237352 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60823bb2-3f65-441b-968e-bee1ef699eaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369d927b570ab87076cac9e54716473a0856899d0f4697b4b30d07a5181c4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ddcef18a4ca3491877f3ed3e14e43e7a478afef84fe6d9614c5dfd52c25864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb26ea922d538bc6c5cff6f8d0474cefe723ab38afb6a734a1f412d7591355ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW1014 13:15:05.624087 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 13:15:05.624866 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 13:15:05.626194 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3329203933/tls.crt::/tmp/serving-cert-3329203933/tls.key\\\\\\\"\\\\nI1014 13:15:06.092172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 13:15:06.112645 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 13:15:06.112671 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 13:15:06.112690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 13:15:06.112696 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 13:15:06.126751 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 13:15:06.126778 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126783 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 13:15:06.126790 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 13:15:06.126793 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 13:15:06.126795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 13:15:06.127007 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 13:15:06.128605 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360cb71d40b47c350378789affa9d40d1097c487f19759ceac7dd653211dc523\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:19Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.247411 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.251525 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:19Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:19 crc kubenswrapper[4725]: W1014 13:15:19.260197 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba13e395_a6be_4eb2_8bcb_4ebbe8a55b8d.slice/crio-d7599896db2a8c1cb56602f065e74f4a6a3ef78ac5af75f36de80a462d50b136 WatchSource:0}: Error finding container d7599896db2a8c1cb56602f065e74f4a6a3ef78ac5af75f36de80a462d50b136: Status 404 returned error can't find the container with id d7599896db2a8c1cb56602f065e74f4a6a3ef78ac5af75f36de80a462d50b136 Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.309982 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.310024 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.310032 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.310046 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.310055 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:19Z","lastTransitionTime":"2025-10-14T13:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.413601 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.413652 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.413665 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.413683 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.413698 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:19Z","lastTransitionTime":"2025-10-14T13:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.516278 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.516312 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.516326 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.516345 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.516357 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:19Z","lastTransitionTime":"2025-10-14T13:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.618952 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.618992 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.619003 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.619019 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.619030 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:19Z","lastTransitionTime":"2025-10-14T13:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.675995 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-cxcmw"] Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.676643 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:15:19 crc kubenswrapper[4725]: E1014 13:15:19.676726 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.690538 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55e8261f-c97f-4d46-b31f-6ef737e6c4c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92d8ba988206535f4ad53245b92484c9cfaf86d2a768d171b6a56a3d71ce2855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c3fa61126ab1de3e82ff31cd2d54b72d6a4355db0dbe16a1300f5349ec032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76dc71de9a8769dc0878cc796ad53a4355d128d9d3011601d1e10127a390b87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-re
sources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968caf419d5ddbb9213c7325a25516bdec5108f010a0e3457faa716b6f6f8955\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:19Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.710387 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:19Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.722039 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.722088 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.722105 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.722126 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.722142 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:19Z","lastTransitionTime":"2025-10-14T13:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.724562 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c335f15dced5e4240b10bf8cf93f05aaa7954860de085d7d25e465ac49fcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ada4aa976fb1be52ba1daea092bc4a477f4fa57a4175db09718886f670cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:19Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.740704 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbgwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4ed727c-f4d1-47cd-a218-e22803eb1750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e839b3fcf6bd42a4f3c8167cfc3a8ce3d333f519ef8fa12daa8bfb7f795e6bb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gn97f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbgwl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:19Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.753829 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cxcmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8cj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8cj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cxcmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:19Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:19 crc 
kubenswrapper[4725]: I1014 13:15:19.768882 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf13954e4938ed6d4493d155d0d8d3ddd585e161184d1869879f8464fb537bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:19Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.781239 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0481199b2dfab1f5e16e57b9e99c70988b1fcdf15877173722a436a0c7cd93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:19Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.786875 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b-metrics-certs\") pod \"network-metrics-daemon-cxcmw\" (UID: \"c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b\") " pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.786963 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8cj6\" (UniqueName: \"kubernetes.io/projected/c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b-kube-api-access-j8cj6\") pod \"network-metrics-daemon-cxcmw\" (UID: \"c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b\") " pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.803142 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f27c973f-d487-4b38-8921-f9c96635219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f246f81af5a032ce9db26f4f99c6cbcc43eb7ff2674a8de64d385efea9ae280d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l7nwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:19Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.824776 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:19Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.826175 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.826244 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.826254 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.826273 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.826285 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:19Z","lastTransitionTime":"2025-10-14T13:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.837367 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9mfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"768af815-9351-4b60-a9de-9f188049acd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d9cb10af1479948d9b616c0c054bea341c96c4964809f1b208472617a376c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8zz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9mfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:19Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.850756 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7400c43922418faea2f15eab69dfcdb6d01fd14117423a0ce3de487078acd746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cc50d2bb6d8b894c9524b2557eda22f857a08c286a90a95ab4be265b524020c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9hh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:19Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.870927 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d54d71-93d1-4cde-940e-a371117f59bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://097fe661717d6f5253245276ca95056fa832cd15a4b2e7bde79d596d186e8e0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://097fe661717d6f5253245276ca95056fa832cd15a4b2e7bde79d596d186e8e0a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T13:15:17Z\\\",\\\"message\\\":\\\" 6066 factory.go:656] Stopping watch factory\\\\nI1014 13:15:16.786136 6066 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1014 13:15:16.786204 6066 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1014 13:15:16.786182 6066 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1014 13:15:16.786219 6066 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1014 13:15:16.786255 6066 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1014 13:15:16.786209 6066 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1014 13:15:16.786394 6066 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1014 13:15:16.786735 6066 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c
300ba646c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9v9qj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:19Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.887539 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8cj6\" (UniqueName: \"kubernetes.io/projected/c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b-kube-api-access-j8cj6\") pod \"network-metrics-daemon-cxcmw\" (UID: \"c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b\") " pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.887599 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b-metrics-certs\") pod \"network-metrics-daemon-cxcmw\" (UID: \"c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b\") " pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:15:19 crc kubenswrapper[4725]: E1014 13:15:19.887777 4725 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 13:15:19 crc kubenswrapper[4725]: E1014 13:15:19.887840 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b-metrics-certs podName:c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b nodeName:}" failed. No retries permitted until 2025-10-14 13:15:20.387821986 +0000 UTC m=+37.236256795 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b-metrics-certs") pod "network-metrics-daemon-cxcmw" (UID: "c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.890634 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8fjcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db996ea5-a3bf-4db3-a0df-fdd640228c83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1932e79f25b11a7dd11ee7d1b43139b7b370aac134f24741288e73006992f497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8fjcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:19Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.901138 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmpdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmpdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jbldr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:19Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.906777 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8cj6\" (UniqueName: 
\"kubernetes.io/projected/c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b-kube-api-access-j8cj6\") pod \"network-metrics-daemon-cxcmw\" (UID: \"c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b\") " pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.916256 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60823bb2-3f65-441b-968e-bee1ef699eaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369d927b570ab87076cac9e54716473a0856899d0f4697b4b30d07a5181c4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ddcef18a4ca3491877f3ed3e14e43e7a478afef84fe6d9614c5dfd52c25864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb26ea922d538bc6c5cff6f8d0474cefe723ab38afb6a734a1f412d7591355ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apise
rver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW1014 13:15:05.624087 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 13:15:05.624866 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 13:15:05.626194 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3329203933/tls.crt::/tmp/serving-cert-3329203933/tls.key\\\\\\\"\\\\nI1014 13:15:06.092172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 13:15:06.112645 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 13:15:06.112671 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 13:15:06.112690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 13:15:06.112696 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 13:15:06.126751 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 13:15:06.126778 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126783 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 13:15:06.126790 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 13:15:06.126793 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 13:15:06.126795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 13:15:06.127007 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 13:15:06.128605 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s 
restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360cb71d40b47c350378789affa9d40d1097c487f19759ceac7dd653211dc523\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:19Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.920230 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.920230 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.920238 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:15:19 crc kubenswrapper[4725]: E1014 13:15:19.920481 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 13:15:19 crc kubenswrapper[4725]: E1014 13:15:19.920558 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 13:15:19 crc kubenswrapper[4725]: E1014 13:15:19.920359 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.928913 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.928938 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.928947 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.928959 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.928970 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:19Z","lastTransitionTime":"2025-10-14T13:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:19 crc kubenswrapper[4725]: I1014 13:15:19.931375 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:19Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.031006 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.031053 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.031065 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.031083 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.031095 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:20Z","lastTransitionTime":"2025-10-14T13:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.134231 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.134284 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.134296 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.134313 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.134326 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:20Z","lastTransitionTime":"2025-10-14T13:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.222144 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr" event={"ID":"ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d","Type":"ContainerStarted","Data":"d3dd5bed9576c9be576b8118eff0593b0eaf3d8fd8614dfbf566906a78f249e6"} Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.222185 4725 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.222203 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr" event={"ID":"ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d","Type":"ContainerStarted","Data":"d7599896db2a8c1cb56602f065e74f4a6a3ef78ac5af75f36de80a462d50b136"} Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.236729 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.236762 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.236774 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.236791 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.236803 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:20Z","lastTransitionTime":"2025-10-14T13:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.240986 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60823bb2-3f65-441b-968e-bee1ef699eaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369d927b570ab87076cac9e54716473a0856899d0f4697b4b30d07a5181c4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ddcef18a4ca3491877f3ed3e14e43e7a478afef84fe6d9614c5dfd52c25864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb26ea922d538bc6c5cff6f8d0474cefe723ab38afb6a734a1f412d7591355ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW1014 13:15:05.624087 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 13:15:05.624866 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 13:15:05.626194 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3329203933/tls.crt::/tmp/serving-cert-3329203933/tls.key\\\\\\\"\\\\nI1014 13:15:06.092172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 13:15:06.112645 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 13:15:06.112671 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 13:15:06.112690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 13:15:06.112696 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 13:15:06.126751 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 13:15:06.126778 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126783 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 13:15:06.126790 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 13:15:06.126793 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 13:15:06.126795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 13:15:06.127007 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 13:15:06.128605 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360cb71d40b47c350378789affa9d40d1097c487f19759ceac7dd653211dc523\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:20Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.255098 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:20Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.269062 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbgwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4ed727c-f4d1-47cd-a218-e22803eb1750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e839b3fcf6bd42a4f3c8167cfc3a8ce3d333f519ef8fa12daa8bfb7f795e6bb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gn97f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbgwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:20Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.299956 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cxcmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8cj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8cj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cxcmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:20Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.312677 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55e8261f-c97f-4d46-b31f-6ef737e6c4c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92d8ba988206535f4ad53245b92484c9cfaf86d2a768d171b6a56a3d71ce2855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c3fa61126ab1de3e82ff31cd2d54b72d6a4355db0dbe16a1300f5349ec032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76dc71de9a8769dc0878cc796ad53a4355d128d9d3011601d1e10127a390b87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968caf419d5ddbb9213c7325a25516bdec5108f010a0e3457faa716b6f6f8955\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:20Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.325842 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:20Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.339610 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.339965 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.340090 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.340187 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.340272 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:20Z","lastTransitionTime":"2025-10-14T13:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.343488 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c335f15dced5e4240b10bf8cf93f05aaa7954860de085d7d25e465ac49fcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ada4aa976fb1be52ba1daea092bc4a477f4fa57a4175db09718886f670cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:20Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.380257 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf13954e4938ed6d4493d155d0d8d3ddd585e161184d1869879f8464fb537bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:20Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.392576 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b-metrics-certs\") pod \"network-metrics-daemon-cxcmw\" (UID: \"c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b\") " pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:15:20 crc kubenswrapper[4725]: E1014 13:15:20.393140 4725 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 13:15:20 crc kubenswrapper[4725]: E1014 13:15:20.393235 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b-metrics-certs podName:c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b nodeName:}" failed. No retries permitted until 2025-10-14 13:15:21.39321613 +0000 UTC m=+38.241650939 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b-metrics-certs") pod "network-metrics-daemon-cxcmw" (UID: "c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.394583 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0481199b2dfab1f5e16e57b9e99c70988b1fcdf15877173722a436a0c7cd93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:20Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.442345 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.442384 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.442397 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.442416 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.442429 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:20Z","lastTransitionTime":"2025-10-14T13:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.444312 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f27c973f-d487-4b38-8921-f9c96635219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f246f81af5a032ce9db26f4f99c6cbcc43eb7ff2674a8de64d385efea9ae280d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"co
ntainerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l7nwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:20Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.453998 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8fjcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db996ea5-a3bf-4db3-a0df-fdd640228c83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1932e79f25b11a7dd11ee7d1b43139b7b370aac134f24741288e73006992f497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8fjcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:20Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.465083 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmpdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmpdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jbldr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:20Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.476790 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:20Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.485585 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9mfx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"768af815-9351-4b60-a9de-9f188049acd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d9cb10af1479948d9b616c0c054bea341c96c4964809f1b208472617a376c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8zz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9mfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:20Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.494950 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7400c43922418faea2f15eab69dfcdb6d01fd14117423a0ce3de487078acd746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cc50d2bb6d8b894c9524b2557eda22f857a08c286a90a95ab4be265b524020c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9hh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:20Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.510171 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d54d71-93d1-4cde-940e-a371117f59bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e76d57a4596bc632c0173008e0e0f6c349b127420099b35bb8ded5f17d695bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://097fe661717d6f5253245276ca95056fa832cd15a4b2e7bde79d596d186e8e0a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T13:15:17Z\\\",\\\"message\\\":\\\" 6066 factory.go:656] Stopping watch factory\\\\nI1014 13:15:16.786136 6066 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1014 13:15:16.786204 6066 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1014 13:15:16.786182 6066 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1014 13:15:16.786219 6066 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1014 13:15:16.786255 6066 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1014 13:15:16.786209 6066 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1014 13:15:16.786394 6066 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1014 13:15:16.786735 6066 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9v9qj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:20Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.544596 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.544628 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.544639 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.544656 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.544667 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:20Z","lastTransitionTime":"2025-10-14T13:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.647221 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.647285 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.647307 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.647333 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.647362 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:20Z","lastTransitionTime":"2025-10-14T13:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.750788 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.750858 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.750876 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.750905 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.750924 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:20Z","lastTransitionTime":"2025-10-14T13:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.853003 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.853041 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.853051 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.853067 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.853081 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:20Z","lastTransitionTime":"2025-10-14T13:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.920897 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:15:20 crc kubenswrapper[4725]: E1014 13:15:20.921044 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.955705 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.955747 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.955755 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.955771 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:20 crc kubenswrapper[4725]: I1014 13:15:20.955781 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:20Z","lastTransitionTime":"2025-10-14T13:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.085909 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.085945 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.085954 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.085968 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.085977 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:21Z","lastTransitionTime":"2025-10-14T13:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.187686 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.187723 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.187739 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.187757 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.187771 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:21Z","lastTransitionTime":"2025-10-14T13:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.227433 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr" event={"ID":"ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d","Type":"ContainerStarted","Data":"f6b75fa898729a1cbd704de8ab24f7d7bb470e00eb9f3ee51e23d86222961b81"} Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.243961 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60823bb2-3f65-441b-968e-bee1ef699eaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369d927b570ab87076cac9e54716473a0856899d0f4697b4b30d07a5181c4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ddcef18a4ca3491877f3ed3e14e43e7a478afef84fe6d9614c5dfd52c25864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb26ea922d538bc6c5cff6f8d0474cefe723ab38afb6a734a1f412d7591355ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW1014 13:15:05.624087 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 13:15:05.624866 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 13:15:05.626194 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3329203933/tls.crt::/tmp/serving-cert-3329203933/tls.key\\\\\\\"\\\\nI1014 13:15:06.092172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 13:15:06.112645 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 13:15:06.112671 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 13:15:06.112690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 13:15:06.112696 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 13:15:06.126751 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 13:15:06.126778 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126783 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 13:15:06.126790 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 13:15:06.126793 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 13:15:06.126795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 13:15:06.127007 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 13:15:06.128605 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360cb71d40b47c350378789affa9d40d1097c487f19759ceac7dd653211dc523\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:21Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.256947 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:21Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.270915 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbgwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4ed727c-f4d1-47cd-a218-e22803eb1750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e839b3fcf6bd42a4f3c8167cfc3a8ce3d333f519ef8fa12daa8bfb7f795e6bb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gn97f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbgwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:21Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.284040 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cxcmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8cj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8cj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cxcmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:21Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.289210 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.289244 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.289254 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.289266 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.289276 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:21Z","lastTransitionTime":"2025-10-14T13:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.296836 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55e8261f-c97f-4d46-b31f-6ef737e6c4c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92d8ba988206535f4ad53245b92484c9cfaf86d2a768d171b6a56a3d71ce2855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c3fa61126ab1de3e82ff31cd2d54b72d6a4355db0dbe16a1300f5349ec032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76dc71de9a8769dc0878cc796ad53a4355d128d9d3011601d1e10127a390b87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968caf419d5ddbb9213c7325a25516bdec5108f010a0e3457faa716b6f6f8955\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:21Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.308733 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:21Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.320797 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c335f15dced5e4240b10bf8cf93f05aaa7954860de085d7d25e465ac49fcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ada4aa976fb1be52ba1daea092bc4a477f4fa57a4175db09718886f670cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:21Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.335877 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf13954e4938ed6d4493d155d0d8d3ddd585e161184d1869879f8464fb537bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:21Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.365807 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0481199b2dfab1f5e16e57b9e99c70988b1fcdf15877173722a436a0c7cd93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:21Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.381783 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f27c973f-d487-4b38-8921-f9c96635219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f246f81af5a032ce9db26f4f99c6cbcc43eb7ff2674a8de64d385efea9ae280d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l7nwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:21Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.391689 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.391721 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:21 crc 
kubenswrapper[4725]: I1014 13:15:21.391729 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.391741 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.391750 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:21Z","lastTransitionTime":"2025-10-14T13:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.392113 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8fjcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db996ea5-a3bf-4db3-a0df-fdd640228c83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1932e79f25b11a7dd11ee7d1b43139b7b370aac134f24741288e73006992f497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8fjcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:21Z is after 2025-08-24T17:21:41Z" Oct 
14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.405241 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b-metrics-certs\") pod \"network-metrics-daemon-cxcmw\" (UID: \"c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b\") " pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:15:21 crc kubenswrapper[4725]: E1014 13:15:21.405427 4725 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 13:15:21 crc kubenswrapper[4725]: E1014 13:15:21.405510 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b-metrics-certs podName:c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b nodeName:}" failed. No retries permitted until 2025-10-14 13:15:23.405493286 +0000 UTC m=+40.253928095 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b-metrics-certs") pod "network-metrics-daemon-cxcmw" (UID: "c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.405994 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3dd5bed9576c9be576b8118eff0593b0eaf3d8fd8614dfbf566906a78f249e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmpdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b75fa898729a1cbd704de8ab24f7d7bb470e00eb9f3ee51e23d86222961b81\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmpdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jbldr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:21Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.417537 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:21Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.427777 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9mfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"768af815-9351-4b60-a9de-9f188049acd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d9cb10af1479948d9b616c0c054bea341c96c4964809f1b208472617a376c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8zz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9mfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:21Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.438189 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7400c43922418faea2f15eab69dfcdb6d01fd14117423a0ce3de487078acd746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cc50d2bb6d8b894c9524b2557eda22f857a08c286a90a95ab4be265b524020c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9hh9\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:21Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.457334 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d54d71-93d1-4cde-940e-a371117f59bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics
-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e76d57a4596bc632c0173008e0e0f6c349b127420099b35bb8ded5f17d695bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://097fe661717d6f5253245276ca95056fa832cd15a4b2e7bde79d596d186e8e0a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T13:15:17Z\\\",\\\"message\\\":\\\" 6066 factory.go:656] Stopping watch factory\\\\nI1014 13:15:16.786136 6066 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1014 13:15:16.786204 6066 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1014 13:15:16.786182 6066 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1014 13:15:16.786219 6066 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1014 13:15:16.786255 6066 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1014 13:15:16.786209 6066 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1014 13:15:16.786394 6066 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1014 13:15:16.786735 6066 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9v9qj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:21Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.494181 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.494220 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.494244 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.494263 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.494272 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:21Z","lastTransitionTime":"2025-10-14T13:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.596721 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.596788 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.596804 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.596826 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.596842 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:21Z","lastTransitionTime":"2025-10-14T13:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.699603 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.699698 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.699726 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.699756 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.699780 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:21Z","lastTransitionTime":"2025-10-14T13:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.803311 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.803372 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.803386 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.803404 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.803417 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:21Z","lastTransitionTime":"2025-10-14T13:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.810819 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.810968 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:15:21 crc kubenswrapper[4725]: E1014 13:15:21.810989 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:15:37.810965692 +0000 UTC m=+54.659400501 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.811027 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.811067 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:15:21 crc kubenswrapper[4725]: E1014 13:15:21.811077 4725 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.811109 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:15:21 crc kubenswrapper[4725]: E1014 13:15:21.811148 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-10-14 13:15:37.811131976 +0000 UTC m=+54.659566845 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 13:15:21 crc kubenswrapper[4725]: E1014 13:15:21.811175 4725 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 13:15:21 crc kubenswrapper[4725]: E1014 13:15:21.811204 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 13:15:37.811198458 +0000 UTC m=+54.659633267 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 13:15:21 crc kubenswrapper[4725]: E1014 13:15:21.811223 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 13:15:21 crc kubenswrapper[4725]: E1014 13:15:21.811245 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 13:15:21 crc kubenswrapper[4725]: E1014 13:15:21.811258 4725 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 13:15:21 crc kubenswrapper[4725]: E1014 13:15:21.811259 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 13:15:21 crc kubenswrapper[4725]: E1014 13:15:21.811274 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 13:15:21 crc kubenswrapper[4725]: E1014 13:15:21.811286 4725 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 13:15:21 crc kubenswrapper[4725]: E1014 13:15:21.811292 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-14 13:15:37.81128217 +0000 UTC m=+54.659717019 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 13:15:21 crc kubenswrapper[4725]: E1014 13:15:21.811311 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-14 13:15:37.811303201 +0000 UTC m=+54.659738110 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.835882 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.835994 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.836012 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.836035 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.836051 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:21Z","lastTransitionTime":"2025-10-14T13:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:21 crc kubenswrapper[4725]: E1014 13:15:21.853773 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d8471f97-cd84-4e08-baca-4ac91f02188a\\\",\\\"systemUUID\\\":\\\"c40d671b-403d-4187-8320-a34d153a3ed0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:21Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.859977 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.860024 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.860048 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.860071 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.860091 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:21Z","lastTransitionTime":"2025-10-14T13:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:21 crc kubenswrapper[4725]: E1014 13:15:21.872961 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d8471f97-cd84-4e08-baca-4ac91f02188a\\\",\\\"systemUUID\\\":\\\"c40d671b-403d-4187-8320-a34d153a3ed0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:21Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.877630 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.877687 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
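Every retry above fails the same way: the status patch is rejected because the node.network-node-identity.openshift.io webhook presents a serving certificate that expired on 2025-08-24 while the node clock reads 2025-10-14. A minimal sketch for confirming the certificate window out of band, assuming Python 3 plus the third-party cryptography package are available on the node and the webhook is still listening on 127.0.0.1:9743 (the endpoint from the failed Post above):

    # Sketch: fetch the webhook's serving certificate without verifying it and
    # print its validity window; an expired notAfter matches the x509 error above.
    import ssl
    from cryptography import x509  # assumption: the 'cryptography' package is installed

    HOST, PORT = "127.0.0.1", 9743  # endpoint taken from the failed webhook Post

    pem = ssl.get_server_certificate((HOST, PORT))  # no verification, so an expired cert is still retrievable
    cert = x509.load_pem_x509_certificate(pem.encode())
    print("notBefore:", cert.not_valid_before)
    print("notAfter: ", cert.not_valid_after)

An expired notAfter here points at the webhook's serving certificate rather than the kubelet's client credentials.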
event="NodeHasNoDiskPressure" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.877705 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.877730 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.877748 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:21Z","lastTransitionTime":"2025-10-14T13:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:21 crc kubenswrapper[4725]: E1014 13:15:21.891980 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d8471f97-cd84-4e08-baca-4ac91f02188a\\\",\\\"systemUUID\\\":\\\"c40d671b-403d-4187-8320-a34d153a3ed0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:21Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.895121 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.895147 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
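Because each retry logs the full status patch, the sequence is easier to audit once reduced to one line per attempt. A small sketch, assuming the journal is fed in on stdin (for example from journalctl -u kubelet --no-pager) with each record on a single line:

    # Sketch: condense the "Error updating node status" retries to one line each,
    # extracting the kubelet timestamp and the certificate bound from the x509 error.
    import re
    import sys

    pat = re.compile(
        r'E(\d{4} \d{2}:\d{2}:\d{2}\.\d+) .*?"Error updating node status, will retry"'
        r'.*?current time (\S+) is after (\S+?)"')

    for line in sys.stdin:
        m = pat.search(line)
        if m:
            print(f"retry at {m.group(1)}: now={m.group(2)}, cert notAfter={m.group(3)}")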
event="NodeHasNoDiskPressure" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.895156 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.895168 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.895176 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:21Z","lastTransitionTime":"2025-10-14T13:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:21 crc kubenswrapper[4725]: E1014 13:15:21.906728 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d8471f97-cd84-4e08-baca-4ac91f02188a\\\",\\\"systemUUID\\\":\\\"c40d671b-403d-4187-8320-a34d153a3ed0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:21Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.910125 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.910179 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.910192 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.910211 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.910221 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:21Z","lastTransitionTime":"2025-10-14T13:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.921053 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.921119 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:15:21 crc kubenswrapper[4725]: E1014 13:15:21.921168 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.921203 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:15:21 crc kubenswrapper[4725]: E1014 13:15:21.921245 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 13:15:21 crc kubenswrapper[4725]: E1014 13:15:21.921373 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.922383 4725 scope.go:117] "RemoveContainer" containerID="ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987" Oct 14 13:15:21 crc kubenswrapper[4725]: E1014 13:15:21.925268 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d8471f97-cd84-4e08-baca-4ac91f02188a\\\",\\\"systemUUID\\\":\\\"c40d671b-403d-4187-8320-a34d153a3ed0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:21Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:21 crc kubenswrapper[4725]: E1014 13:15:21.925704 4725 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.929750 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
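After the final failure the kubelet gives up ("update node status exceeds retry count"), and the node stays NotReady for the reason repeated in every Ready condition above: no CNI configuration file in /etc/kubernetes/cni/net.d/. A quick sketch for inspecting that directory on the node, assuming the usual CNI convention that the runtime loads *.conf, *.conflist, or *.json network configs:

    # Sketch: list CNI network configs in the directory named by the kubelet
    # message; an empty result is consistent with NetworkPluginNotReady.
    import os

    CNI_DIR = "/etc/kubernetes/cni/net.d"  # path taken from the log message above

    if os.path.isdir(CNI_DIR):
        confs = sorted(n for n in os.listdir(CNI_DIR)
                       if n.endswith((".conf", ".conflist", ".json")))
        print(f"{CNI_DIR}: {confs or 'no CNI config files found'}")
    else:
        print(f"{CNI_DIR} does not exist")

Here the directory is plausibly still empty because the ovnkube-node pod that would populate the network config is itself crash-looping, as the CrashLoopBackOff entry further below shows.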
event="NodeHasSufficientMemory" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.929828 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.929850 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.929896 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:21 crc kubenswrapper[4725]: I1014 13:15:21.929910 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:21Z","lastTransitionTime":"2025-10-14T13:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.033433 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.033502 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.033516 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.033532 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.033541 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:22Z","lastTransitionTime":"2025-10-14T13:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.139210 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.139256 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.139270 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.139293 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.139307 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:22Z","lastTransitionTime":"2025-10-14T13:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.231924 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.233211 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3920ebe7d3eb6b372d24be3b36b16b12c48ea827aae7cf0d00fd707b6681d857"} Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.235048 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9v9qj_38d54d71-93d1-4cde-940e-a371117f59bd/ovnkube-controller/1.log" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.235780 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9v9qj_38d54d71-93d1-4cde-940e-a371117f59bd/ovnkube-controller/0.log" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.238159 4725 generic.go:334] "Generic (PLEG): container finished" podID="38d54d71-93d1-4cde-940e-a371117f59bd" containerID="5e76d57a4596bc632c0173008e0e0f6c349b127420099b35bb8ded5f17d695bb" exitCode=1 Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.238585 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" event={"ID":"38d54d71-93d1-4cde-940e-a371117f59bd","Type":"ContainerDied","Data":"5e76d57a4596bc632c0173008e0e0f6c349b127420099b35bb8ded5f17d695bb"} Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.238647 4725 scope.go:117] "RemoveContainer" containerID="097fe661717d6f5253245276ca95056fa832cd15a4b2e7bde79d596d186e8e0a" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.239806 4725 scope.go:117] "RemoveContainer" containerID="5e76d57a4596bc632c0173008e0e0f6c349b127420099b35bb8ded5f17d695bb" Oct 14 13:15:22 crc kubenswrapper[4725]: E1014 13:15:22.240090 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-9v9qj_openshift-ovn-kubernetes(38d54d71-93d1-4cde-940e-a371117f59bd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.240560 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.240593 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.240603 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.240619 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.240632 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:22Z","lastTransitionTime":"2025-10-14T13:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.253835 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:22Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.264667 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9mfx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"768af815-9351-4b60-a9de-9f188049acd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d9cb10af1479948d9b616c0c054bea341c96c4964809f1b208472617a376c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8zz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9mfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:22Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.280491 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7400c43922418faea2f15eab69dfcdb6d01fd14117423a0ce3de487078acd746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cc50d2bb6d8b894c9524b2557eda22f857a08c286a90a95ab4be265b524020c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9hh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:22Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.301009 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d54d71-93d1-4cde-940e-a371117f59bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e76d57a4596bc632c0173008e0e0f6c349b127420099b35bb8ded5f17d695bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://097fe661717d6f5253245276ca95056fa832cd15a4b2e7bde79d596d186e8e0a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T13:15:17Z\\\",\\\"message\\\":\\\" 6066 factory.go:656] Stopping watch factory\\\\nI1014 13:15:16.786136 6066 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1014 13:15:16.786204 6066 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1014 13:15:16.786182 6066 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1014 13:15:16.786219 6066 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1014 13:15:16.786255 6066 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1014 13:15:16.786209 6066 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1014 13:15:16.786394 6066 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1014 13:15:16.786735 6066 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e76d57a4596bc632c0173008e0e0f6c349b127420099b35bb8ded5f17d695bb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"message\\\":\\\" 13:15:21.264935 6223 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr after 0 failed attempt(s)\\\\nI1014 13:15:21.264942 6223 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr\\\\nI1014 13:15:21.264930 6223 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1014 13:15:21.264952 6223 lb_config.go:1031] Cluster endpoints for openshift-config-operator/metrics for network=default are: map[]\\\\nI1014 13:15:21.264966 6223 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-cxcmw\\\\nI1014 13:15:21.264981 6223 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-cxcmw\\\\nI1014 13:15:21.264988 6223 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-cxcmw in node crc\\\\nI1014 13:15:21.265024 6223 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-cxcmw] creating logical port openshift-multus_network-metrics-daemon-cxcmw for pod on switch crc\\\\nF1014 13:15:21.264695 6223 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9v9qj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:22Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.314880 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8fjcf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db996ea5-a3bf-4db3-a0df-fdd640228c83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1932e79f25b11a7dd11ee7d1b43139b7b370aac134f24741288e73006992f497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8fjcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:22Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.325566 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3dd5bed9576c9be576b8118eff0593b0eaf3d8fd8614dfbf566906a78f249e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmpdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b75fa898729a1cbd704de8ab24f7d7bb470e00eb9f3ee51e23d86222961b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmpdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jbldr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:22Z is after 2025-08-24T17:21:41Z" Oct 14 
13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.339819 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60823bb2-3f65-441b-968e-bee1ef699eaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369d927b570ab87076cac9e54716473a0856899d0f4697b4b30d07a5181c4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ddcef18a4ca3491877f3ed3e14e43e7a478afef84fe6d9614c5dfd52c25864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb26ea922d538bc6c5cff6f8d0474cefe723ab38afb6a734a1f412d7591355ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW1014 13:15:05.624087 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 13:15:05.624866 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 13:15:05.626194 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3329203933/tls.crt::/tmp/serving-cert-3329203933/tls.key\\\\\\\"\\\\nI1014 13:15:06.092172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 13:15:06.112645 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 13:15:06.112671 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 13:15:06.112690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 13:15:06.112696 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 13:15:06.126751 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 13:15:06.126778 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126783 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 13:15:06.126790 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 13:15:06.126793 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 13:15:06.126795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 13:15:06.127007 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 13:15:06.128605 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360cb71d40b47c350378789affa9d40d1097c487f19759ceac7dd653211dc523\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:22Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.342607 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.342632 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.342639 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.342653 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.342662 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:22Z","lastTransitionTime":"2025-10-14T13:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.351477 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:22Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.365617 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55e8261f-c97f-4d46-b31f-6ef737e6c4c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92d8ba988206535f4ad53245b92484c9cfaf86d2a768d171b6a56a3d71ce2855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c3fa61126ab1de3e82ff31cd2d54b72d6a4355db0dbe16a1300f5349ec032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76dc71de9a8769dc0878cc796ad53a4355d128d9d3011601d1e10127a390b87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968caf419d5ddbb9213c7325a25516bdec5108f010a0e3457faa716b6f6f8955\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:22Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.379112 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:22Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.392274 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c335f15dced5e4240b10bf8cf93f05aaa7954860de085d7d25e465ac49fcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ada4aa976fb1be52ba1daea092bc4a477f4fa57a4175db09718886f670cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:22Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.404802 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbgwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4ed727c-f4d1-47cd-a218-e22803eb1750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e839b3fcf6bd42a4f3c8167cfc3a8ce3d333f519ef8fa12daa8bfb7f795e6bb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gn97f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbgwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:22Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.415883 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cxcmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8cj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8cj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cxcmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:22Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.429966 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf13954e4938ed6d4493d155d0d8d3ddd585e161184d1869879f8464fb537bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:22Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.442004 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0481199b2dfab1f5e16e57b9e99c70988b1fcdf15877173722a436a0c7cd93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:22Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.444592 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.444654 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.444666 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.444683 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.444697 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:22Z","lastTransitionTime":"2025-10-14T13:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.460447 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f27c973f-d487-4b38-8921-f9c96635219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f246f81af5a032ce9db26f4f99c6cbcc43eb7ff2674a8de64d385efea9ae280d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l7nwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:22Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.547823 4725 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.547867 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.547876 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.547892 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.547902 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:22Z","lastTransitionTime":"2025-10-14T13:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.650366 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.650397 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.650406 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.650420 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.650430 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:22Z","lastTransitionTime":"2025-10-14T13:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.753295 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.753339 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.753349 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.753364 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.753376 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:22Z","lastTransitionTime":"2025-10-14T13:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.856101 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.856171 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.856187 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.856212 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.856225 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:22Z","lastTransitionTime":"2025-10-14T13:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.920415 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:15:22 crc kubenswrapper[4725]: E1014 13:15:22.920873 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.960492 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.960549 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.960562 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.960582 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:22 crc kubenswrapper[4725]: I1014 13:15:22.960600 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:22Z","lastTransitionTime":"2025-10-14T13:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.063860 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.064096 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.064321 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.064431 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.064564 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:23Z","lastTransitionTime":"2025-10-14T13:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.167122 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.167377 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.167439 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.167536 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.167600 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:23Z","lastTransitionTime":"2025-10-14T13:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.242833 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9v9qj_38d54d71-93d1-4cde-940e-a371117f59bd/ovnkube-controller/1.log" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.246407 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.259795 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf13954e4938ed6d4493d155d0d8d3ddd585e161184d1869879f8464fb537bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:23Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.270532 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.270565 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.270573 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.270588 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.270598 4725 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:23Z","lastTransitionTime":"2025-10-14T13:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.271402 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0481199b2dfab1f5e16e57b9e99c70988b1fcdf15877173722a436a0c7cd93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:23Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.285302 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f27c973f-d487-4b38-8921-f9c96635219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f246f81af5a032ce9db26f4f99c6cbcc43eb7ff2674a8de64d385efea9ae280d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l7nwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:23Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.297829 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8fjcf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db996ea5-a3bf-4db3-a0df-fdd640228c83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1932e79f25b11a7dd11ee7d1b43139b7b370aac134f24741288e73006992f497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8fjcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:23Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.308712 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3dd5bed9576c9be576b8118eff0593b0eaf3d8fd8614dfbf566906a78f249e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmpdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b75fa898729a1cbd704de8ab24f7d7bb470e00eb9f3ee51e23d86222961b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmpdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jbldr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:23Z is after 2025-08-24T17:21:41Z" Oct 14 
13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.320189 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:23Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.331507 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9mfx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"768af815-9351-4b60-a9de-9f188049acd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d9cb10af1479948d9b616c0c054bea341c96c4964809f1b208472617a376c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8zz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9mfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:23Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.342567 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7400c43922418faea2f15eab69dfcdb6d01fd14117423a0ce3de487078acd746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cc50d2bb6d8b894c9524b2557eda22f857a08c286a90a95ab4be265b524020c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9hh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:23Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.359702 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d54d71-93d1-4cde-940e-a371117f59bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e76d57a4596bc632c0173008e0e0f6c349b127420099b35bb8ded5f17d695bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://097fe661717d6f5253245276ca95056fa832cd15a4b2e7bde79d596d186e8e0a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T13:15:17Z\\\",\\\"message\\\":\\\" 6066 factory.go:656] Stopping watch factory\\\\nI1014 13:15:16.786136 6066 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1014 13:15:16.786204 6066 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1014 13:15:16.786182 6066 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1014 13:15:16.786219 6066 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1014 13:15:16.786255 6066 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1014 13:15:16.786209 6066 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1014 13:15:16.786394 6066 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1014 13:15:16.786735 6066 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e76d57a4596bc632c0173008e0e0f6c349b127420099b35bb8ded5f17d695bb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"message\\\":\\\" 13:15:21.264935 6223 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr after 0 failed attempt(s)\\\\nI1014 13:15:21.264942 6223 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr\\\\nI1014 13:15:21.264930 6223 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1014 13:15:21.264952 6223 lb_config.go:1031] Cluster endpoints for openshift-config-operator/metrics for network=default are: map[]\\\\nI1014 13:15:21.264966 6223 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-cxcmw\\\\nI1014 13:15:21.264981 6223 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-cxcmw\\\\nI1014 13:15:21.264988 6223 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-cxcmw in node crc\\\\nI1014 13:15:21.265024 6223 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-cxcmw] creating logical port openshift-multus_network-metrics-daemon-cxcmw for pod on switch crc\\\\nF1014 13:15:21.264695 6223 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9v9qj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:23Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.372989 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.373037 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.373048 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.373065 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.373112 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:23Z","lastTransitionTime":"2025-10-14T13:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.374223 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60823bb2-3f65-441b-968e-bee1ef699eaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369d927b570ab87076cac9e54716473a0856899d0f4697b4b30d07a5181c4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ddcef18a4ca3491877f3ed3e14e43e7a478afef84fe6d9614c5dfd52c25864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb26ea922d538bc6c5cff6f8d0474cefe723ab38afb6a734a1f412d7591355ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apis
erver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3920ebe7d3eb6b372d24be3b36b16b12c48ea827aae7cf0d00fd707b6681d857\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW1014 13:15:05.624087 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 13:15:05.624866 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 13:15:05.626194 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3329203933/tls.crt::/tmp/serving-cert-3329203933/tls.key\\\\\\\"\\\\nI1014 13:15:06.092172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 13:15:06.112645 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 13:15:06.112671 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 13:15:06.112690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 13:15:06.112696 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 13:15:06.126751 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 13:15:06.126778 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126783 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 13:15:06.126790 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 13:15:06.126793 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 13:15:06.126795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 13:15:06.127007 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 13:15:06.128605 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360cb71d40b47c350378789affa9d40d1097c487f19759ceac7dd653211dc523\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:23Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.388696 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:23Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.400318 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbgwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4ed727c-f4d1-47cd-a218-e22803eb1750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e839b3fcf6bd42a4f3c8167cfc3a8ce3d333f519ef8fa12daa8bfb7f795e6bb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gn97f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbgwl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:23Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.411204 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cxcmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8cj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8cj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cxcmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:23Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:23 crc 
kubenswrapper[4725]: I1014 13:15:23.423054 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55e8261f-c97f-4d46-b31f-6ef737e6c4c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92d8ba988206535f4ad53245b92484c9cfaf86d2a768d171b6a56a3d71ce2855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c3fa61126ab1de3e82ff31cd2d54b72d6a4355db0dbe16a1300f5349ec032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76dc71de9a8769dc0878cc796ad53a4355d128d9d3011601d1e10127a390b87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968caf419d5ddbb9213c7325a25516bdec5108f010a0e3457faa716b6f6f8955\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:23Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.429781 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b-metrics-certs\") pod \"network-metrics-daemon-cxcmw\" (UID: \"c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b\") " pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:15:23 crc kubenswrapper[4725]: E1014 13:15:23.429932 4725 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 13:15:23 crc kubenswrapper[4725]: E1014 13:15:23.429980 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b-metrics-certs podName:c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b nodeName:}" failed. No retries permitted until 2025-10-14 13:15:27.429966427 +0000 UTC m=+44.278401236 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b-metrics-certs") pod "network-metrics-daemon-cxcmw" (UID: "c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.434558 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:23Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.447552 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c335f15dced5e4240b10bf8cf93f05aaa7954860de085d7d25e465ac49fcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ada4aa976fb1be52ba1daea092bc4a477f4fa57a4175db09718886f670cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:23Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.475390 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.475445 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.475473 4725 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.475491 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.475501 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:23Z","lastTransitionTime":"2025-10-14T13:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.578036 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.578079 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.578090 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.578106 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.578128 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:23Z","lastTransitionTime":"2025-10-14T13:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.681221 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.681294 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.681311 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.681331 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.681345 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:23Z","lastTransitionTime":"2025-10-14T13:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.784444 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.784519 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.784530 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.784548 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.784562 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:23Z","lastTransitionTime":"2025-10-14T13:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.887731 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.887808 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.887819 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.887840 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.887853 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:23Z","lastTransitionTime":"2025-10-14T13:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.920494 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:15:23 crc kubenswrapper[4725]: E1014 13:15:23.920917 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.920991 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.921047 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:15:23 crc kubenswrapper[4725]: E1014 13:15:23.921249 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 13:15:23 crc kubenswrapper[4725]: E1014 13:15:23.921409 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.947597 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf13954e4938ed6d4493d155d0d8d3ddd585e161184d1869879f8464fb537bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:23Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.965889 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0481199b2dfab1f5e16e57b9e99c70988b1fcdf15877173722a436a0c7cd93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:23Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.982852 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f27c973f-d487-4b38-8921-f9c96635219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f246f81af5a032ce9db26f4f99c6cbcc43eb7ff2674a8de64d385efea9ae280d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l7nwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:23Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.990070 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.990115 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:23 crc 
kubenswrapper[4725]: I1014 13:15:23.990128 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.990147 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.990501 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:23Z","lastTransitionTime":"2025-10-14T13:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:23 crc kubenswrapper[4725]: I1014 13:15:23.996750 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:23Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.008864 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9mfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"768af815-9351-4b60-a9de-9f188049acd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d9cb10af1479948d9b616c0c054bea341c96c4964809f1b208472617a376c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8zz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9mfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:24Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.021122 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7400c43922418faea2f15eab69dfcdb6d01fd14117423a0ce3de487078acd746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cc50d2bb6d8b894c9524b2557eda22f857a08c286a90a95ab4be265b524020c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9hh9\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:24Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.044165 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d54d71-93d1-4cde-940e-a371117f59bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics
-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e76d57a4596bc632c0173008e0e0f6c349b127420099b35bb8ded5f17d695bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://097fe661717d6f5253245276ca95056fa832cd15a4b2e7bde79d596d186e8e0a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T13:15:17Z\\\",\\\"message\\\":\\\" 6066 factory.go:656] Stopping watch factory\\\\nI1014 13:15:16.786136 6066 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1014 13:15:16.786204 6066 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1014 13:15:16.786182 6066 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1014 13:15:16.786219 6066 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1014 13:15:16.786255 6066 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1014 13:15:16.786209 6066 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1014 13:15:16.786394 6066 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1014 13:15:16.786735 6066 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e76d57a4596bc632c0173008e0e0f6c349b127420099b35bb8ded5f17d695bb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"message\\\":\\\" 13:15:21.264935 6223 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr after 0 failed attempt(s)\\\\nI1014 13:15:21.264942 6223 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr\\\\nI1014 13:15:21.264930 6223 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1014 13:15:21.264952 6223 lb_config.go:1031] Cluster endpoints for openshift-config-operator/metrics for network=default are: map[]\\\\nI1014 13:15:21.264966 6223 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-cxcmw\\\\nI1014 13:15:21.264981 6223 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-cxcmw\\\\nI1014 13:15:21.264988 6223 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-cxcmw in node crc\\\\nI1014 13:15:21.265024 6223 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-cxcmw] creating logical port openshift-multus_network-metrics-daemon-cxcmw for pod on switch crc\\\\nF1014 13:15:21.264695 6223 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9v9qj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:24Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.056060 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8fjcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db996ea5-a3bf-4db3-a0df-fdd640228c83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1932e79f25b11a7dd11ee7d1b43139b7b370aac134f24741288e73006992f497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.16
8.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8fjcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:24Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.067645 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3dd5bed9576c9be576b8118eff0593b0eaf3d8fd8614dfbf566906a78f249e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmpdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b75fa898729a1cbd704de8ab24f7d7bb470e00eb9f3ee51e23d86222961b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\
\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmpdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jbldr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:24Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.079521 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60823bb2-3f65-441b-968e-bee1ef699eaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369d927b570ab87076cac9e54716473a0856899d0f4697b4b30d07a5181c4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ddcef18a4ca3491877f3ed3e14e43e7a478afef84fe6d9614c5dfd52c25864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb26ea922d538bc6c5cff6f8d0474cefe723ab38afb6a734a1f412d7591355ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3920ebe7d3eb6b372d24be3b36b16b12c48ea827aae7cf0d00fd707b6681d857\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW1014 13:15:05.624087 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 13:15:05.624866 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 13:15:05.626194 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3329203933/tls.crt::/tmp/serving-cert-3329203933/tls.key\\\\\\\"\\\\nI1014 13:15:06.092172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 13:15:06.112645 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 13:15:06.112671 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 13:15:06.112690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 13:15:06.112696 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 13:15:06.126751 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 13:15:06.126778 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126783 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 13:15:06.126790 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 13:15:06.126793 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 13:15:06.126795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 13:15:06.127007 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 13:15:06.128605 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360cb71d40b47c350378789affa9d40d1097c487f19759ceac7dd653211dc523\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:24Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.090177 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:24Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.093409 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.093495 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.093514 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.093538 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.093557 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:24Z","lastTransitionTime":"2025-10-14T13:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.100911 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55e8261f-c97f-4d46-b31f-6ef737e6c4c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92d8ba988206535f4ad53245b92484c9cfaf86d2a768d171b6a56a3d71ce2855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c3fa61126ab1de3e82ff31cd2d54b72d6a4355db0dbe16a1300f5349ec032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76dc71de9a8769dc0878cc796ad53a4355d128d9d3011601d1e10127a390b87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968caf419d5ddbb9213c7325a25516bdec5108f010a0e3457faa716b6f6f8955\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:24Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.111377 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:24Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.122438 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c335f15dced5e4240b10bf8cf93f05aaa7954860de085d7d25e465ac49fcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ada4aa976fb1be52ba1daea092bc4a477f4fa57a4175db09718886f670cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:24Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.134421 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbgwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4ed727c-f4d1-47cd-a218-e22803eb1750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e839b3fcf6bd42a4f3c8167cfc3a8ce3d333f519ef8fa12daa8bfb7f795e6bb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gn97f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbgwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:24Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.143715 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cxcmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8cj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8cj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cxcmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:24Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.195964 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.196017 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.196031 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.196051 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.196066 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:24Z","lastTransitionTime":"2025-10-14T13:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.299529 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.299583 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.299594 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.299615 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.299626 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:24Z","lastTransitionTime":"2025-10-14T13:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.402049 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.402286 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.402348 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.402474 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.402543 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:24Z","lastTransitionTime":"2025-10-14T13:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.504423 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.504478 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.504489 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.504503 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.504515 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:24Z","lastTransitionTime":"2025-10-14T13:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.606299 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.606630 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.606722 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.606811 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.606896 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:24Z","lastTransitionTime":"2025-10-14T13:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.710006 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.710045 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.710057 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.710074 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.710087 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:24Z","lastTransitionTime":"2025-10-14T13:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.812552 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.812589 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.812598 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.812614 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.812624 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:24Z","lastTransitionTime":"2025-10-14T13:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.915115 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.915156 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.915166 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.915181 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.915190 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:24Z","lastTransitionTime":"2025-10-14T13:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:24 crc kubenswrapper[4725]: I1014 13:15:24.920324 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:15:24 crc kubenswrapper[4725]: E1014 13:15:24.920591 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b" Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.018331 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.018393 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.018411 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.018438 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.018490 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:25Z","lastTransitionTime":"2025-10-14T13:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.120827 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.120859 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.120868 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.120881 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.120907 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:25Z","lastTransitionTime":"2025-10-14T13:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.223441 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.223502 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.223513 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.223529 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.223542 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:25Z","lastTransitionTime":"2025-10-14T13:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.326669 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.326750 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.326764 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.326786 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.326802 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:25Z","lastTransitionTime":"2025-10-14T13:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.430114 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.430186 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.430207 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.430235 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.430255 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:25Z","lastTransitionTime":"2025-10-14T13:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.533718 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.533767 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.533776 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.533793 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.533803 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:25Z","lastTransitionTime":"2025-10-14T13:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.639302 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.639373 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.639384 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.639410 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.639422 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:25Z","lastTransitionTime":"2025-10-14T13:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.743244 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.743303 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.743312 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.743333 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.743347 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:25Z","lastTransitionTime":"2025-10-14T13:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.846505 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.846583 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.846609 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.846639 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.846664 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:25Z","lastTransitionTime":"2025-10-14T13:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.920442 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.920518 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:15:25 crc kubenswrapper[4725]: E1014 13:15:25.920606 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 13:15:25 crc kubenswrapper[4725]: E1014 13:15:25.920765 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.920864 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:15:25 crc kubenswrapper[4725]: E1014 13:15:25.921105 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.948690 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.948733 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.948741 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.948756 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:25 crc kubenswrapper[4725]: I1014 13:15:25.948765 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:25Z","lastTransitionTime":"2025-10-14T13:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:26 crc kubenswrapper[4725]: I1014 13:15:26.051322 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:26 crc kubenswrapper[4725]: I1014 13:15:26.051422 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:26 crc kubenswrapper[4725]: I1014 13:15:26.051446 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:26 crc kubenswrapper[4725]: I1014 13:15:26.051513 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:26 crc kubenswrapper[4725]: I1014 13:15:26.051537 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:26Z","lastTransitionTime":"2025-10-14T13:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:26 crc kubenswrapper[4725]: I1014 13:15:26.154728 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:26 crc kubenswrapper[4725]: I1014 13:15:26.154790 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:26 crc kubenswrapper[4725]: I1014 13:15:26.154807 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:26 crc kubenswrapper[4725]: I1014 13:15:26.154832 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:26 crc kubenswrapper[4725]: I1014 13:15:26.154849 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:26Z","lastTransitionTime":"2025-10-14T13:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:26 crc kubenswrapper[4725]: I1014 13:15:26.257818 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:26 crc kubenswrapper[4725]: I1014 13:15:26.257876 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:26 crc kubenswrapper[4725]: I1014 13:15:26.257890 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:26 crc kubenswrapper[4725]: I1014 13:15:26.257915 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:26 crc kubenswrapper[4725]: I1014 13:15:26.257932 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:26Z","lastTransitionTime":"2025-10-14T13:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:26 crc kubenswrapper[4725]: I1014 13:15:26.361149 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:26 crc kubenswrapper[4725]: I1014 13:15:26.361208 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:26 crc kubenswrapper[4725]: I1014 13:15:26.361223 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:26 crc kubenswrapper[4725]: I1014 13:15:26.361244 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:26 crc kubenswrapper[4725]: I1014 13:15:26.361261 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:26Z","lastTransitionTime":"2025-10-14T13:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:26 crc kubenswrapper[4725]: I1014 13:15:26.463831 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:26 crc kubenswrapper[4725]: I1014 13:15:26.463877 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:26 crc kubenswrapper[4725]: I1014 13:15:26.463891 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:26 crc kubenswrapper[4725]: I1014 13:15:26.463915 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:26 crc kubenswrapper[4725]: I1014 13:15:26.463946 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:26Z","lastTransitionTime":"2025-10-14T13:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:26 crc kubenswrapper[4725]: I1014 13:15:26.566543 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:26 crc kubenswrapper[4725]: I1014 13:15:26.566586 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:26 crc kubenswrapper[4725]: I1014 13:15:26.566599 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:26 crc kubenswrapper[4725]: I1014 13:15:26.566617 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:26 crc kubenswrapper[4725]: I1014 13:15:26.566630 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:26Z","lastTransitionTime":"2025-10-14T13:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:26 crc kubenswrapper[4725]: I1014 13:15:26.669620 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:26 crc kubenswrapper[4725]: I1014 13:15:26.669684 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:26 crc kubenswrapper[4725]: I1014 13:15:26.669704 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:26 crc kubenswrapper[4725]: I1014 13:15:26.669731 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:26 crc kubenswrapper[4725]: I1014 13:15:26.669752 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:26Z","lastTransitionTime":"2025-10-14T13:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:26 crc kubenswrapper[4725]: I1014 13:15:26.773052 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:26 crc kubenswrapper[4725]: I1014 13:15:26.773117 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:26 crc kubenswrapper[4725]: I1014 13:15:26.773133 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:26 crc kubenswrapper[4725]: I1014 13:15:26.773159 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:26 crc kubenswrapper[4725]: I1014 13:15:26.773173 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:26Z","lastTransitionTime":"2025-10-14T13:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:26 crc kubenswrapper[4725]: I1014 13:15:26.876709 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:26 crc kubenswrapper[4725]: I1014 13:15:26.876754 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:26 crc kubenswrapper[4725]: I1014 13:15:26.876763 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:26 crc kubenswrapper[4725]: I1014 13:15:26.876778 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
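The four "Recording event message" records and the "Node became not ready" write above form a single status-heartbeat block that the kubelet re-emits roughly every 100 ms for as long as the Ready condition stays False. When triaging a capture like this, it helps to collapse those runs first. A minimal sketch, assuming journalctl-style klog records arrive on stdin; the regex is fitted to the record shape seen here and is an assumption, not part of any kubelet tooling:

```python
import re
import sys
from collections import Counter

# Fits records shaped like:
#   I1014 13:15:26.876789 4725 setters.go:603] "Node became not ready" ...
REC = re.compile(r'[IEW]\d{4} (?P<ts>\d{2}:\d{2}:\d{2})\.\d+ \d+ (?P<src>\S+)\] "(?P<msg>[^"]+)"')

counts, first, last = Counter(), {}, {}
for line in sys.stdin:  # e.g. journalctl -u kubelet | python3 summarize.py
    m = REC.search(line)
    if not m:
        continue
    key = (m['src'], m['msg'])
    counts[key] += 1
    first.setdefault(key, m['ts'])
    last[key] = m['ts']

for (src, msg), n in counts.most_common():
    print(f"{n:6d}x  {src:28s}  {msg}  [{first[(src, msg)]} .. {last[(src, msg)]}]")
```

Run over this stretch of the journal, it would reduce the repeated heartbeat blocks to a few counted lines with first/last timestamps, leaving the one-off records (sandbox creation, volume mounts) easy to spot.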
Oct 14 13:15:26 crc kubenswrapper[4725]: I1014 13:15:26.876789 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:26Z","lastTransitionTime":"2025-10-14T13:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:26 crc kubenswrapper[4725]: I1014 13:15:26.920279 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:15:26 crc kubenswrapper[4725]: E1014 13:15:26.920492 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b" Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.028692 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.028743 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.028754 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.028774 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.028786 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:27Z","lastTransitionTime":"2025-10-14T13:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.131831 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.131889 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.131904 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.131924 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
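Both the pod_workers.go sync error above and the recurring NotReady condition bottom out in the same complaint: no CNI configuration file under /etc/kubernetes/cni/net.d/. Whether the network provider has written its config yet can be checked directly on the node; a minimal sketch, where the extension list is an assumption about what CNI config loaders commonly accept:

```python
import os

CNI_DIR = "/etc/kubernetes/cni/net.d"       # directory named in the kubelet error
CNI_EXTS = (".conf", ".conflist", ".json")  # assumption: common CNI config suffixes

try:
    entries = sorted(os.listdir(CNI_DIR))
except FileNotFoundError:
    print(f"{CNI_DIR} does not exist yet; the network provider has not started")
else:
    configs = [e for e in entries if e.endswith(CNI_EXTS)]
    if configs:
        print("CNI config present:", configs)
    else:
        print(f"{CNI_DIR} exists but holds no CNI config files: {entries}")
```

Once the provider drops a config file there, the NetworkReady=false heartbeats above should stop on a subsequent runtime status sync.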
Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.131936 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:27Z","lastTransitionTime":"2025-10-14T13:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.234562 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.234620 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.234631 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.234654 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.234667 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:27Z","lastTransitionTime":"2025-10-14T13:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.339200 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.339264 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.339280 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.339304 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.339318 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:27Z","lastTransitionTime":"2025-10-14T13:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.442942 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.443607 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.443678 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.443718 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.443747 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:27Z","lastTransitionTime":"2025-10-14T13:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.466828 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b-metrics-certs\") pod \"network-metrics-daemon-cxcmw\" (UID: \"c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b\") " pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:15:27 crc kubenswrapper[4725]: E1014 13:15:27.467046 4725 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 13:15:27 crc kubenswrapper[4725]: E1014 13:15:27.467131 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b-metrics-certs podName:c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b nodeName:}" failed. No retries permitted until 2025-10-14 13:15:35.467107618 +0000 UTC m=+52.315542437 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b-metrics-certs") pod "network-metrics-daemon-cxcmw" (UID: "c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.546286 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.546349 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.546361 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.546384 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
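The metrics-certs mount above fails with `object "openshift-multus"/"metrics-daemon-secret" not registered`, the kubelet's way of saying its local object cache has no such secret yet, and the operation is parked for an 8 s retry backoff. Whether the secret actually exists in the API server can be checked from any client with credentials; a sketch using the official kubernetes Python client, which is an assumed tool choice rather than anything the log prescribes:

```python
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()  # assumption: a kubeconfig for this cluster is available
v1 = client.CoreV1Api()

try:
    secret = v1.read_namespaced_secret("metrics-daemon-secret", "openshift-multus")
    print("secret exists; keys:", sorted((secret.data or {}).keys()))
except ApiException as e:
    # 404 here means the secret is genuinely absent, not merely unsynced in the kubelet
    print("API error", e.status, "- secret missing or inaccessible")
```

If the secret exists, the failure is just startup ordering and the retry scheduled for 13:15:35 should clear it; if it does not, the component that publishes it presumably has not run yet.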
Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.546400 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:27Z","lastTransitionTime":"2025-10-14T13:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.649074 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.649141 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.649158 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.649184 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.649204 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:27Z","lastTransitionTime":"2025-10-14T13:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.752310 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.752366 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.752379 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.752398 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.752413 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:27Z","lastTransitionTime":"2025-10-14T13:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.856249 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.856323 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.856344 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.856364 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.856383 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:27Z","lastTransitionTime":"2025-10-14T13:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.921027 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.921148 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.921160 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:15:27 crc kubenswrapper[4725]: E1014 13:15:27.921330 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 13:15:27 crc kubenswrapper[4725]: E1014 13:15:27.921391 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 13:15:27 crc kubenswrapper[4725]: E1014 13:15:27.921497 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.958885 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.958935 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.958946 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.958965 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
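Here three more pods (network-check-source, network-check-target, and the networking-console-plugin) hit the identical "Error syncing pod, skipping" failure, each tagged with its podUID. Grouping such records by pod makes it obvious when one root cause, here the missing CNI config, explains every stuck pod; a small sketch in the same assumed stdin-pipeline style as above:

```python
import re
import sys
from collections import Counter

# Pulls pod="ns/name" and podUID="..." out of "Error syncing pod" records.
ERR = re.compile(r'"Error syncing pod, skipping".*pod="(?P<pod>[^"]+)" podUID="(?P<uid>[^"]+)"')

failures = Counter()
for line in sys.stdin:
    m = ERR.search(line)
    if m:
        failures[(m['pod'], m['uid'])] += 1

for (pod, uid), n in failures.most_common():
    print(f"{n:4d}x  {pod}  podUID={uid}")
```

For this capture the tally would show the same four workload pods failing repeatedly at ~1-2 s intervals, all with the same network-not-ready error, which points back at the CNI directory check rather than at any individual pod.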
Oct 14 13:15:27 crc kubenswrapper[4725]: I1014 13:15:27.958978 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:27Z","lastTransitionTime":"2025-10-14T13:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.062293 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.062391 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.062422 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.062509 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.062541 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:28Z","lastTransitionTime":"2025-10-14T13:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.166625 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.166687 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.166713 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.166739 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.166762 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:28Z","lastTransitionTime":"2025-10-14T13:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.269511 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.269569 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.269586 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.269609 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.269627 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:28Z","lastTransitionTime":"2025-10-14T13:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.372768 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.372814 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.372829 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.372847 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.372858 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:28Z","lastTransitionTime":"2025-10-14T13:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.476517 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.476592 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.476609 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.476633 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.476651 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:28Z","lastTransitionTime":"2025-10-14T13:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.579705 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.579787 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.579800 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.579857 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.579870 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:28Z","lastTransitionTime":"2025-10-14T13:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.683577 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.683638 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.683659 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.683684 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.683701 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:28Z","lastTransitionTime":"2025-10-14T13:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.787496 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.787559 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.787577 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.787602 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.787621 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:28Z","lastTransitionTime":"2025-10-14T13:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.889906 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.889938 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.889948 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.889962 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.889972 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:28Z","lastTransitionTime":"2025-10-14T13:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.920853 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:15:28 crc kubenswrapper[4725]: E1014 13:15:28.921106 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b" Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.992771 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.992850 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.992874 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.992903 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:28 crc kubenswrapper[4725]: I1014 13:15:28.992928 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:28Z","lastTransitionTime":"2025-10-14T13:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.096167 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.096219 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.096234 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.096252 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.096265 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:29Z","lastTransitionTime":"2025-10-14T13:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.199117 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.199161 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.199171 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.199187 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.199197 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:29Z","lastTransitionTime":"2025-10-14T13:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.302501 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.302575 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.302626 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.302656 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.302686 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:29Z","lastTransitionTime":"2025-10-14T13:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.405893 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.406011 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.406032 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.406059 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.406078 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:29Z","lastTransitionTime":"2025-10-14T13:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.508804 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.508853 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.508869 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.508889 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.508907 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:29Z","lastTransitionTime":"2025-10-14T13:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.611799 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.611874 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.611901 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.611930 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.611953 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:29Z","lastTransitionTime":"2025-10-14T13:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.715139 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.715167 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.715175 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.715188 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.715198 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:29Z","lastTransitionTime":"2025-10-14T13:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.819012 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.819089 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.819110 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.819134 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.819169 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:29Z","lastTransitionTime":"2025-10-14T13:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.920940 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.921181 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.921343 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:15:29 crc kubenswrapper[4725]: E1014 13:15:29.921331 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 13:15:29 crc kubenswrapper[4725]: E1014 13:15:29.921551 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 13:15:29 crc kubenswrapper[4725]: E1014 13:15:29.921681 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.922411 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.922504 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.922526 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.922549 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:29 crc kubenswrapper[4725]: I1014 13:15:29.922568 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:29Z","lastTransitionTime":"2025-10-14T13:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.024760 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.024798 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.024808 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.024824 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.024835 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:30Z","lastTransitionTime":"2025-10-14T13:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.127421 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.127471 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.127480 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.127492 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.127501 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:30Z","lastTransitionTime":"2025-10-14T13:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.230051 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.230087 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.230097 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.230115 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.230126 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:30Z","lastTransitionTime":"2025-10-14T13:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.332364 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.332431 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.332487 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.332524 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.332549 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:30Z","lastTransitionTime":"2025-10-14T13:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.435849 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.435914 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.435932 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.435955 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.435973 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:30Z","lastTransitionTime":"2025-10-14T13:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.539068 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.539107 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.539116 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.539132 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.539141 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:30Z","lastTransitionTime":"2025-10-14T13:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.642115 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.642188 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.642201 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.642215 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.642224 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:30Z","lastTransitionTime":"2025-10-14T13:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.745775 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.745844 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.745868 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.745931 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.745958 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:30Z","lastTransitionTime":"2025-10-14T13:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.848377 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.848424 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.848443 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.848495 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.848513 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:30Z","lastTransitionTime":"2025-10-14T13:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.920338 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw"
Oct 14 13:15:30 crc kubenswrapper[4725]: E1014 13:15:30.920516 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b"
Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.951419 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.951489 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.951501 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.951523 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 13:15:30 crc kubenswrapper[4725]: I1014 13:15:30.951536 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:30Z","lastTransitionTime":"2025-10-14T13:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.054864 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.054926 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.054947 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.054971 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.054990 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:31Z","lastTransitionTime":"2025-10-14T13:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.158582 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.158663 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.158685 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.158718 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.158737 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:31Z","lastTransitionTime":"2025-10-14T13:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.262656 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.262709 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.262722 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.262743 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.262758 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:31Z","lastTransitionTime":"2025-10-14T13:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.376424 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.376511 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.376535 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.376557 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.376574 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:31Z","lastTransitionTime":"2025-10-14T13:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.479604 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.479644 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.479656 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.479672 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.479683 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:31Z","lastTransitionTime":"2025-10-14T13:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.582527 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.582585 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.582598 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.582619 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.582632 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:31Z","lastTransitionTime":"2025-10-14T13:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.686252 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.686306 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.686316 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.686333 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.686345 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:31Z","lastTransitionTime":"2025-10-14T13:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.790321 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.790366 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.790375 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.790393 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.790404 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:31Z","lastTransitionTime":"2025-10-14T13:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.893653 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.893716 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.893729 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.893748 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.893759 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:31Z","lastTransitionTime":"2025-10-14T13:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.920516 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.920651 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.920522 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 14 13:15:31 crc kubenswrapper[4725]: E1014 13:15:31.920717 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 14 13:15:31 crc kubenswrapper[4725]: E1014 13:15:31.920895 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 14 13:15:31 crc kubenswrapper[4725]: E1014 13:15:31.921085 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.997038 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.997083 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.997095 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.997110 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 13:15:31 crc kubenswrapper[4725]: I1014 13:15:31.997123 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:31Z","lastTransitionTime":"2025-10-14T13:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.100964 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.101142 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.101176 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.101294 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.101325 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:32Z","lastTransitionTime":"2025-10-14T13:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.204224 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.204307 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.204330 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.204359 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.204376 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:32Z","lastTransitionTime":"2025-10-14T13:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.247924 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.247990 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.248003 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.248024 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.248036 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:32Z","lastTransitionTime":"2025-10-14T13:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:32 crc kubenswrapper[4725]: E1014 13:15:32.267063 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d8471f97-cd84-4e08-baca-4ac91f02188a\\\",\\\"systemUUID\\\":\\\"c40d671b-403d-4187-8320-a34d153a3ed0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:32Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.273370 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.273443 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.273513 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.273546 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.273574 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:32Z","lastTransitionTime":"2025-10-14T13:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:32 crc kubenswrapper[4725]: E1014 13:15:32.298200 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d8471f97-cd84-4e08-baca-4ac91f02188a\\\",\\\"systemUUID\\\":\\\"c40d671b-403d-4187-8320-a34d153a3ed0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:32Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.305202 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.305271 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.305295 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.305330 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.305354 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:32Z","lastTransitionTime":"2025-10-14T13:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:32 crc kubenswrapper[4725]: E1014 13:15:32.323640 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d8471f97-cd84-4e08-baca-4ac91f02188a\\\",\\\"systemUUID\\\":\\\"c40d671b-403d-4187-8320-a34d153a3ed0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:32Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.329032 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.329074 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.329089 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.329108 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.329122 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:32Z","lastTransitionTime":"2025-10-14T13:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:32 crc kubenswrapper[4725]: E1014 13:15:32.345138 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d8471f97-cd84-4e08-baca-4ac91f02188a\\\",\\\"systemUUID\\\":\\\"c40d671b-403d-4187-8320-a34d153a3ed0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:32Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.351524 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.351588 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.351605 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.351630 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.351645 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:32Z","lastTransitionTime":"2025-10-14T13:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:32 crc kubenswrapper[4725]: E1014 13:15:32.366017 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d8471f97-cd84-4e08-baca-4ac91f02188a\\\",\\\"systemUUID\\\":\\\"c40d671b-403d-4187-8320-a34d153a3ed0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:32Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:32 crc kubenswrapper[4725]: E1014 13:15:32.366273 4725 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.368851 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.368934 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.368953 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.368978 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.369000 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:32Z","lastTransitionTime":"2025-10-14T13:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.471644 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.472004 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.472210 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.472424 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.472664 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:32Z","lastTransitionTime":"2025-10-14T13:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.576131 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.576185 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.576203 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.576227 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.576244 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:32Z","lastTransitionTime":"2025-10-14T13:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.679651 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.680284 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.680434 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.680530 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.680566 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:32Z","lastTransitionTime":"2025-10-14T13:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.783015 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.783070 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.783093 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.783119 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.783138 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:32Z","lastTransitionTime":"2025-10-14T13:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.885758 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.885806 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.885817 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.885834 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.885846 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:32Z","lastTransitionTime":"2025-10-14T13:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.920652 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:15:32 crc kubenswrapper[4725]: E1014 13:15:32.920815 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.921896 4725 scope.go:117] "RemoveContainer" containerID="5e76d57a4596bc632c0173008e0e0f6c349b127420099b35bb8ded5f17d695bb" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.938707 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf13954e4938ed6d4493d155d0d8d3ddd585e161184d1869879f8464fb537bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:32Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.953051 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0481199b2dfab1f5e16e57b9e99c70988b1fcdf15877173722a436a0c7cd93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:32Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.971068 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f27c973f-d487-4b38-8921-f9c96635219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f246f81af5a032ce9db26f4f99c6cbcc43eb7ff2674a8de64d385efea9ae280d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l7nwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:32Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.988658 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.989136 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:32 crc 
kubenswrapper[4725]: I1014 13:15:32.989148 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.989165 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.989176 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:32Z","lastTransitionTime":"2025-10-14T13:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:32 crc kubenswrapper[4725]: I1014 13:15:32.990330 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:32Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.004008 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9mfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"768af815-9351-4b60-a9de-9f188049acd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d9cb10af1479948d9b616c0c054bea341c96c4964809f1b208472617a376c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8zz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9mfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:33Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.017522 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7400c43922418faea2f15eab69dfcdb6d01fd14117423a0ce3de487078acd746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cc50d2bb6d8b894c9524b2557eda22f857a08c286a90a95ab4be265b524020c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9hh9\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:33Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.036133 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d54d71-93d1-4cde-940e-a371117f59bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics
-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e76d57a4596bc632c0173008e0e0f6c349b127420099b35bb8ded5f17d695bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e76d57a4596bc632c0173008e0e0f6c349b127420099b35bb8ded5f17d695bb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"message\\\":\\\" 13:15:21.264935 6223 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr after 0 failed attempt(s)\\\\nI1014 13:15:21.264942 6223 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr\\\\nI1014 13:15:21.264930 6223 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1014 13:15:21.264952 6223 lb_config.go:1031] Cluster endpoints for openshift-config-operator/metrics for network=default are: map[]\\\\nI1014 13:15:21.264966 6223 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-cxcmw\\\\nI1014 13:15:21.264981 6223 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-cxcmw\\\\nI1014 13:15:21.264988 6223 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-cxcmw in node crc\\\\nI1014 13:15:21.265024 6223 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-cxcmw] creating logical port openshift-multus_network-metrics-daemon-cxcmw for pod on switch crc\\\\nF1014 13:15:21.264695 6223 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-9v9qj_openshift-ovn-kubernetes(38d54d71-93d1-4cde-940e-a371117f59bd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9v9qj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:33Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.047260 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8fjcf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db996ea5-a3bf-4db3-a0df-fdd640228c83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1932e79f25b11a7dd11ee7d1b43139b7b370aac134f24741288e73006992f497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8fjcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:33Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.058222 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3dd5bed9576c9be576b8118eff0593b0eaf3d8fd8614dfbf566906a78f249e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmpdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b75fa898729a1cbd704de8ab24f7d7bb470e00eb9f3ee51e23d86222961b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmpdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jbldr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:33Z is after 2025-08-24T17:21:41Z" Oct 14 
13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.074664 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60823bb2-3f65-441b-968e-bee1ef699eaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369d927b570ab87076cac9e54716473a0856899d0f4697b4b30d07a5181c4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ddcef18a4ca3491877f3ed3e14e43e7a478afef84fe6d9614c5dfd52c25864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb26ea922d538bc6c5cff6f8d0474cefe723ab38afb6a734a1f412d7591355ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3920ebe7d3eb6b372d24be3b36b16b12c48ea827aae7cf0d00fd707b6681d857\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW1014 13:15:05.624087 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 13:15:05.624866 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 13:15:05.626194 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3329203933/tls.crt::/tmp/serving-cert-3329203933/tls.key\\\\\\\"\\\\nI1014 13:15:06.092172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 13:15:06.112645 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 13:15:06.112671 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 13:15:06.112690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 13:15:06.112696 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 13:15:06.126751 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 13:15:06.126778 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126783 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 13:15:06.126790 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 13:15:06.126793 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 13:15:06.126795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 13:15:06.127007 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 13:15:06.128605 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360cb71d40b47c350378789affa9d40d1097c487f19759ceac7dd653211dc523\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:33Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.086703 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:33Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.091645 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.091696 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.091713 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.091739 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.091756 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:33Z","lastTransitionTime":"2025-10-14T13:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.100029 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55e8261f-c97f-4d46-b31f-6ef737e6c4c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92d8ba988206535f4ad53245b92484c9cfaf86d2a768d171b6a56a3d71ce2855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c3fa61126ab1de3e82ff31cd2d54b72d6a4355db0dbe16a1300f5349ec032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76dc71de9a8769dc0878cc796ad53a4355d128d9d3011601d1e10127a390b87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968caf419d5ddbb9213c7325a25516bdec5108f010a0e3457faa716b6f6f8955\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:33Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.113956 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:33Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.126808 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c335f15dced5e4240b10bf8cf93f05aaa7954860de085d7d25e465ac49fcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ada4aa976fb1be52ba1daea092bc4a477f4fa57a4175db09718886f670cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:33Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.140315 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbgwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4ed727c-f4d1-47cd-a218-e22803eb1750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e839b3fcf6bd42a4f3c8167cfc3a8ce3d333f519ef8fa12daa8bfb7f795e6bb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gn97f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbgwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:33Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.150954 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cxcmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8cj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8cj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cxcmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:33Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.194660 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.194707 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.194720 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.194738 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.194749 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:33Z","lastTransitionTime":"2025-10-14T13:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.281158 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9v9qj_38d54d71-93d1-4cde-940e-a371117f59bd/ovnkube-controller/1.log" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.284965 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" event={"ID":"38d54d71-93d1-4cde-940e-a371117f59bd","Type":"ContainerStarted","Data":"fe2b1e62f60db38faadf074db3ea102158cffc992ec8b64a7c79d48b92388fea"} Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.285118 4725 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.297522 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.297558 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.297572 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.297590 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.297604 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:33Z","lastTransitionTime":"2025-10-14T13:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.302716 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3dd5bed9576c9be576b8118eff0593b0eaf3d8fd8614dfbf566906a78f249e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmpdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b75fa898729a1cbd704de8ab24f7d7bb470e00eb9f3ee51e23d86222961b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmpdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jbldr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:33Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.321693 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:33Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.331760 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9mfx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"768af815-9351-4b60-a9de-9f188049acd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d9cb10af1479948d9b616c0c054bea341c96c4964809f1b208472617a376c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8zz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9mfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:33Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.344549 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7400c43922418faea2f15eab69dfcdb6d01fd14117423a0ce3de487078acd746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cc50d2bb6d8b894c9524b2557eda22f857a08c286a90a95ab4be265b524020c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9hh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:33Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.362630 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d54d71-93d1-4cde-940e-a371117f59bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2b1e62f60db38faadf074db3ea102158cffc992ec8b64a7c79d48b92388fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e76d57a4596bc632c0173008e0e0f6c349b127420099b35bb8ded5f17d695bb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"message\\\":\\\" 13:15:21.264935 6223 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr after 0 failed attempt(s)\\\\nI1014 13:15:21.264942 6223 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr\\\\nI1014 13:15:21.264930 6223 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1014 13:15:21.264952 6223 lb_config.go:1031] Cluster endpoints for openshift-config-operator/metrics for network=default are: map[]\\\\nI1014 13:15:21.264966 6223 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-cxcmw\\\\nI1014 13:15:21.264981 6223 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-cxcmw\\\\nI1014 13:15:21.264988 6223 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-cxcmw in node crc\\\\nI1014 13:15:21.265024 6223 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-cxcmw] creating logical port openshift-multus_network-metrics-daemon-cxcmw for pod on switch crc\\\\nF1014 13:15:21.264695 6223 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9v9qj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:33Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.375884 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8fjcf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db996ea5-a3bf-4db3-a0df-fdd640228c83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1932e79f25b11a7dd11ee7d1b43139b7b370aac134f24741288e73006992f497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8fjcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:33Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.394008 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60823bb2-3f65-441b-968e-bee1ef699eaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369d927b570ab87076cac9e54716473a0856899d0f4697b4b30d07a5181c4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ddcef18a4ca3491877f3ed3e14e43e7a478afef84fe6d9614c5dfd52c25864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb26ea922d538bc6c5cff6f8d0474cefe723ab38afb6a734a1f412d7591355ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3920ebe7d3eb6b372d24be3b36b16b12c48ea827aae7cf0d00fd707b6681d857\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW1014 13:15:05.624087 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 13:15:05.624866 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 13:15:05.626194 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3329203933/tls.crt::/tmp/serving-cert-3329203933/tls.key\\\\\\\"\\\\nI1014 13:15:06.092172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 13:15:06.112645 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 13:15:06.112671 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 13:15:06.112690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 13:15:06.112696 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 13:15:06.126751 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 13:15:06.126778 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126783 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 13:15:06.126790 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 13:15:06.126793 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 13:15:06.126795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 13:15:06.127007 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 13:15:06.128605 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360cb71d40b47c350378789affa9d40d1097c487f19759ceac7dd653211dc523\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:33Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.400229 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.400276 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.400294 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.400317 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.400333 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:33Z","lastTransitionTime":"2025-10-14T13:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.410558 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:33Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.425496 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cxcmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8cj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8cj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cxcmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:33Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.441188 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55e8261f-c97f-4d46-b31f-6ef737e6c4c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92d8ba988206535f4ad53245b92484c9cfaf86d2a768d171b6a56a3d71ce2855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c3fa61126ab1de3e82ff31cd2d54b72d6a4355db0dbe16a1300f5349ec032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76dc71de9a8769dc0878cc796ad53a4355d128d9d3011601d1e10127a390b87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968caf419d5ddbb9213c7325a25516bdec5108f010a0e3457faa716b6f6f8955\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:33Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.457065 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:33Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.479056 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c335f15dced5e4240b10bf8cf93f05aaa7954860de085d7d25e465ac49fcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ada4aa976fb1be52ba1daea092bc4a477f4fa57a4175db09718886f670cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:33Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.498576 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbgwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4ed727c-f4d1-47cd-a218-e22803eb1750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e839b3fcf6bd42a4f3c8167cfc3a8ce3d333f519ef8fa12daa8bfb7f795e6bb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gn97f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbgwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:33Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.503734 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.503805 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.503824 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.503850 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.503866 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:33Z","lastTransitionTime":"2025-10-14T13:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.518351 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf13954e4938ed6d4493d155d0d8d3ddd585e161184d1869879f8464fb537bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:33Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.533679 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0481199b2dfab1f5e16e57b9e99c70988b1fcdf15877173722a436a0c7cd93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:33Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.553509 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f27c973f-d487-4b38-8921-f9c96635219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f246f81af5a032ce9db26f4f99c6cbcc43eb7ff2674a8de64d385efea9ae280d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l7nwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:33Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.606431 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.606510 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:33 crc 
kubenswrapper[4725]: I1014 13:15:33.606524 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.606542 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.606552 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:33Z","lastTransitionTime":"2025-10-14T13:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.708889 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.708954 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.708969 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.708992 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.709005 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:33Z","lastTransitionTime":"2025-10-14T13:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.811757 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.811794 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.811805 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.811821 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.811832 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:33Z","lastTransitionTime":"2025-10-14T13:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.914877 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.914929 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.914941 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.914960 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.914972 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:33Z","lastTransitionTime":"2025-10-14T13:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.921011 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.921015 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.921065 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:15:33 crc kubenswrapper[4725]: E1014 13:15:33.921200 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 13:15:33 crc kubenswrapper[4725]: E1014 13:15:33.921288 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 13:15:33 crc kubenswrapper[4725]: E1014 13:15:33.921494 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.937552 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf13954e4938ed6d4493d155d0d8d3ddd585e161184d1869879f8464fb537bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:33Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.951302 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0481199b2dfab1f5e16e57b9e99c70988b1fcdf15877173722a436a0c7cd93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:33Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.972405 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f27c973f-d487-4b38-8921-f9c96635219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f246f81af5a032ce9db26f4f99c6cbcc43eb7ff2674a8de64d385efea9ae280d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l7nwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:33Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.986707 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:33Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:33 crc kubenswrapper[4725]: I1014 13:15:33.997960 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9mfx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"768af815-9351-4b60-a9de-9f188049acd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d9cb10af1479948d9b616c0c054bea341c96c4964809f1b208472617a376c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8zz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9mfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:33Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.013739 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7400c43922418faea2f15eab69dfcdb6d01fd14117423a0ce3de487078acd746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cc50d2bb6d8b894c9524b2557eda22f857a08c286a90a95ab4be265b524020c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9hh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:34Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.017199 4725 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.017245 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.017254 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.017269 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.017280 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:34Z","lastTransitionTime":"2025-10-14T13:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.034650 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d54d71-93d1-4cde-940e-a371117f59bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2b1e62f60db38faadf074db3ea102158cffc992ec8b64a7c79d48b92388fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e76d57a4596bc632c0173008e0e0f6c349b127420099b35bb8ded5f17d695bb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"message\\\":\\\" 13:15:21.264935 6223 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr after 0 failed attempt(s)\\\\nI1014 13:15:21.264942 6223 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr\\\\nI1014 13:15:21.264930 6223 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1014 13:15:21.264952 6223 lb_config.go:1031] Cluster endpoints for openshift-config-operator/metrics for network=default are: map[]\\\\nI1014 13:15:21.264966 6223 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-cxcmw\\\\nI1014 13:15:21.264981 6223 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-cxcmw\\\\nI1014 13:15:21.264988 6223 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-cxcmw in node crc\\\\nI1014 13:15:21.265024 6223 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-cxcmw] creating logical port openshift-multus_network-metrics-daemon-cxcmw for pod on switch crc\\\\nF1014 13:15:21.264695 6223 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9v9qj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:34Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.045051 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8fjcf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db996ea5-a3bf-4db3-a0df-fdd640228c83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1932e79f25b11a7dd11ee7d1b43139b7b370aac134f24741288e73006992f497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8fjcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:34Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.055945 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3dd5bed9576c9be576b8118eff0593b0eaf3d8fd8614dfbf566906a78f249e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmpdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b75fa898729a1cbd704de8ab24f7d7bb470e00eb9f3ee51e23d86222961b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmpdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jbldr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:34Z is after 2025-08-24T17:21:41Z" Oct 14 
13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.072437 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60823bb2-3f65-441b-968e-bee1ef699eaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369d927b570ab87076cac9e54716473a0856899d0f4697b4b30d07a5181c4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ddcef18a4ca3491877f3ed3e14e43e7a478afef84fe6d9614c5dfd52c25864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb26ea922d538bc6c5cff6f8d0474cefe723ab38afb6a734a1f412d7591355ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3920ebe7d3eb6b372d24be3b36b16b12c48ea827aae7cf0d00fd707b6681d857\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW1014 13:15:05.624087 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 13:15:05.624866 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 13:15:05.626194 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3329203933/tls.crt::/tmp/serving-cert-3329203933/tls.key\\\\\\\"\\\\nI1014 13:15:06.092172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 13:15:06.112645 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 13:15:06.112671 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 13:15:06.112690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 13:15:06.112696 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 13:15:06.126751 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 13:15:06.126778 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126783 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 13:15:06.126790 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 13:15:06.126793 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 13:15:06.126795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 13:15:06.127007 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 13:15:06.128605 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360cb71d40b47c350378789affa9d40d1097c487f19759ceac7dd653211dc523\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:34Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.084760 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:34Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.097629 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55e8261f-c97f-4d46-b31f-6ef737e6c4c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92d8ba988206535f4ad53245b92484c9cfaf86d2a768d171b6a56a3d71ce2855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c3fa61126ab1de3e82ff31cd2d54b72d6a4355db0dbe16a1300f5349ec032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76dc71de9a8769dc0878cc796ad53a4355d128d9d3011601d1e10127a390b87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968caf419d5ddbb9213c7325a25516bdec5108f010a0e3457faa716b6f6f8955\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:34Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.110296 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:34Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.119393 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.119426 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.119435 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.119474 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.119488 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:34Z","lastTransitionTime":"2025-10-14T13:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.124026 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c335f15dced5e4240b10bf8cf93f05aaa7954860de085d7d25e465ac49fcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ada4aa976fb1be52ba1daea092bc4a477f4fa57a4175db09718886f670cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:34Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.142771 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbgwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4ed727c-f4d1-47cd-a218-e22803eb1750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e839b3fcf6bd42a4f3c8167cfc3a8ce3d333f519ef8fa12daa8bfb7f795e6bb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gn97f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbgwl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:34Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.154349 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cxcmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8cj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8cj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cxcmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:34Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:34 crc 
kubenswrapper[4725]: I1014 13:15:34.222511 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.222575 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.222586 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.222613 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.222629 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:34Z","lastTransitionTime":"2025-10-14T13:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.292344 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9v9qj_38d54d71-93d1-4cde-940e-a371117f59bd/ovnkube-controller/2.log" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.293340 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9v9qj_38d54d71-93d1-4cde-940e-a371117f59bd/ovnkube-controller/1.log" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.297880 4725 generic.go:334] "Generic (PLEG): container finished" podID="38d54d71-93d1-4cde-940e-a371117f59bd" containerID="fe2b1e62f60db38faadf074db3ea102158cffc992ec8b64a7c79d48b92388fea" exitCode=1 Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.297931 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" event={"ID":"38d54d71-93d1-4cde-940e-a371117f59bd","Type":"ContainerDied","Data":"fe2b1e62f60db38faadf074db3ea102158cffc992ec8b64a7c79d48b92388fea"} Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.298027 4725 scope.go:117] "RemoveContainer" containerID="5e76d57a4596bc632c0173008e0e0f6c349b127420099b35bb8ded5f17d695bb" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.298852 4725 scope.go:117] "RemoveContainer" containerID="fe2b1e62f60db38faadf074db3ea102158cffc992ec8b64a7c79d48b92388fea" Oct 14 13:15:34 crc kubenswrapper[4725]: E1014 13:15:34.299036 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-9v9qj_openshift-ovn-kubernetes(38d54d71-93d1-4cde-940e-a371117f59bd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.316284 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60823bb2-3f65-441b-968e-bee1ef699eaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369d927b570ab87076cac9e54716473a0856899d0f4697b4b30d07a5181c4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ddcef18a4ca3491877f3ed3e14e43e7a478afef84fe6d9614c5dfd52c25864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb26ea922d538bc6c5cff6f8d0474cefe723ab38afb6a734a1f412d7591355ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3920ebe7d3eb6b372d24be3b36b16b12c48ea827aae7cf0d00fd707b6681d857\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW1014 13:15:05.624087 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 13:15:05.624866 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 13:15:05.626194 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3329203933/tls.crt::/tmp/serving-cert-3329203933/tls.key\\\\\\\"\\\\nI1014 13:15:06.092172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 13:15:06.112645 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 13:15:06.112671 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 13:15:06.112690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 13:15:06.112696 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 13:15:06.126751 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 13:15:06.126778 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126783 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 13:15:06.126790 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 13:15:06.126793 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 13:15:06.126795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 13:15:06.127007 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 13:15:06.128605 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360cb71d40b47c350378789affa9d40d1097c487f19759ceac7dd653211dc523\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:34Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.326227 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.326287 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.326308 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.326333 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.326348 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:34Z","lastTransitionTime":"2025-10-14T13:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.332179 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:34Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.351739 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55e8261f-c97f-4d46-b31f-6ef737e6c4c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92d8ba988206535f4ad53245b92484c9cfaf86d2a768d171b6a56a3d71ce2855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c3fa61126ab1de3e82ff31cd2d54b72d6a4355db0dbe16a1300f5349ec032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76dc71de9a8769dc0878cc796ad53a4355d128d9d3011601d1e10127a390b87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968caf419d5ddbb9213c7325a25516bdec5108f010a0e3457faa716b6f6f8955\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:34Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.367723 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:34Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.383268 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c335f15dced5e4240b10bf8cf93f05aaa7954860de085d7d25e465ac49fcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ada4aa976fb1be52ba1daea092bc4a477f4fa57a4175db09718886f670cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:34Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.396790 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbgwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4ed727c-f4d1-47cd-a218-e22803eb1750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e839b3fcf6bd42a4f3c8167cfc3a8ce3d333f519ef8fa12daa8bfb7f795e6bb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\"
,\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gn97f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbgwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:34Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.409207 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cxcmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8cj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8cj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cxcmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:34Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.426386 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf13954e4938ed6d4493d155d0d8d3ddd585e161184d1869879f8464fb537bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:34Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.429412 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.429530 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.429548 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.429577 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.429594 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:34Z","lastTransitionTime":"2025-10-14T13:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.440406 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0481199b2dfab1f5e16e57b9e99c70988b1fcdf15877173722a436a0c7cd93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:34Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.455600 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f27c973f-d487-4b38-8921-f9c96635219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f246f81af5a032ce9db26f4f99c6cbcc43eb7ff2674a8de64d385efea9ae280d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l7nwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:34Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.471830 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:34Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.483233 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9mfx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"768af815-9351-4b60-a9de-9f188049acd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d9cb10af1479948d9b616c0c054bea341c96c4964809f1b208472617a376c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8zz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9mfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:34Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.497943 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7400c43922418faea2f15eab69dfcdb6d01fd14117423a0ce3de487078acd746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cc50d2bb6d8b894c9524b2557eda22f857a08c286a90a95ab4be265b524020c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9hh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:34Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.517522 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d54d71-93d1-4cde-940e-a371117f59bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2b1e62f60db38faadf074db3ea102158cffc992ec8b64a7c79d48b92388fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e76d57a4596bc632c0173008e0e0f6c349b127420099b35bb8ded5f17d695bb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"message\\\":\\\" 13:15:21.264935 6223 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr after 0 failed attempt(s)\\\\nI1014 13:15:21.264942 6223 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr\\\\nI1014 13:15:21.264930 6223 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1014 13:15:21.264952 6223 lb_config.go:1031] Cluster endpoints for openshift-config-operator/metrics for network=default are: map[]\\\\nI1014 13:15:21.264966 6223 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-cxcmw\\\\nI1014 13:15:21.264981 6223 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-cxcmw\\\\nI1014 13:15:21.264988 6223 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-cxcmw in node crc\\\\nI1014 13:15:21.265024 6223 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-cxcmw] creating logical port openshift-multus_network-metrics-daemon-cxcmw for pod on switch crc\\\\nF1014 13:15:21.264695 6223 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe2b1e62f60db38faadf074db3ea102158cffc992ec8b64a7c79d48b92388fea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T13:15:34Z\\\",\\\"message\\\":\\\".go:409] Going to retry *v1.Pod resource setup for 1 objects: 
[openshift-multus/network-metrics-daemon-cxcmw]\\\\nI1014 13:15:33.963152 6443 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1014 13:15:33.963169 6443 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-cxcmw before timer (time: 2025-10-14 13:15:34.927380579 +0000 UTC m=+1.581869588): skip\\\\nI1014 13:15:33.963178 6443 factory.go:656] Stopping watch factory\\\\nI1014 13:15:33.963184 6443 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 41.871µs)\\\\nI1014 13:15:33.963004 6443 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name pod/openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b9mz9. OVN-Kubernetes controller took 3.408e-05 seconds. No OVN measurement.\\\\nI1014 13:15:33.963197 6443 ovnkube.go:599] Stopped ovnkube\\\\nI1014 13:15:33.963196 6443 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1014 13:15:33.963210 6443 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1014 13:15:33.963242 6443 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1014 13:15:33.963343 6443 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9v9qj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:34Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.532581 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.532623 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.532637 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.532656 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.532668 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:34Z","lastTransitionTime":"2025-10-14T13:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.533484 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8fjcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db996ea5-a3bf-4db3-a0df-fdd640228c83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1932e79f25b11a7dd11ee7d1b43139b7b370aac134f24741288e73006992f497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8fjcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:34Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.549581 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3dd5bed9576c9be576b8118eff0593b0eaf3d8fd8614dfbf566906a78f249e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmpdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b75fa898729a1cbd704de8ab24f7d7bb470e00eb9f3ee51e23d86222961b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmpdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jbldr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:34Z is after 2025-08-24T17:21:41Z" Oct 14 
13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.635444 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.635521 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.635536 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.635557 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.635571 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:34Z","lastTransitionTime":"2025-10-14T13:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.738380 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.738415 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.738426 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.738439 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.738470 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:34Z","lastTransitionTime":"2025-10-14T13:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.841569 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.841614 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.841626 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.841643 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.841656 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:34Z","lastTransitionTime":"2025-10-14T13:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.920327 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:15:34 crc kubenswrapper[4725]: E1014 13:15:34.920576 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.944280 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.944379 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.944405 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.944433 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:34 crc kubenswrapper[4725]: I1014 13:15:34.944484 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:34Z","lastTransitionTime":"2025-10-14T13:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.047793 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.047866 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.047882 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.047905 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.047918 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:35Z","lastTransitionTime":"2025-10-14T13:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.150331 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.150369 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.150385 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.150404 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.150415 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:35Z","lastTransitionTime":"2025-10-14T13:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.254912 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.254983 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.254994 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.255020 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.255034 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:35Z","lastTransitionTime":"2025-10-14T13:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.302513 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9v9qj_38d54d71-93d1-4cde-940e-a371117f59bd/ovnkube-controller/2.log" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.357839 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.357888 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.357900 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.357920 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.357939 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:35Z","lastTransitionTime":"2025-10-14T13:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.460199 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.460237 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.460249 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.460266 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.460278 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:35Z","lastTransitionTime":"2025-10-14T13:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.554639 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b-metrics-certs\") pod \"network-metrics-daemon-cxcmw\" (UID: \"c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b\") " pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:15:35 crc kubenswrapper[4725]: E1014 13:15:35.554861 4725 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 13:15:35 crc kubenswrapper[4725]: E1014 13:15:35.554941 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b-metrics-certs podName:c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b nodeName:}" failed. 
No retries permitted until 2025-10-14 13:15:51.554922947 +0000 UTC m=+68.403357756 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b-metrics-certs") pod "network-metrics-daemon-cxcmw" (UID: "c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.563033 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.563101 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.563120 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.563141 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.563158 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:35Z","lastTransitionTime":"2025-10-14T13:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.665907 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.665946 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.665956 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.665971 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.665984 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:35Z","lastTransitionTime":"2025-10-14T13:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.742999 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.744103 4725 scope.go:117] "RemoveContainer" containerID="fe2b1e62f60db38faadf074db3ea102158cffc992ec8b64a7c79d48b92388fea" Oct 14 13:15:35 crc kubenswrapper[4725]: E1014 13:15:35.744762 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-9v9qj_openshift-ovn-kubernetes(38d54d71-93d1-4cde-940e-a371117f59bd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.762692 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60823bb2-3f65-441b-968e-bee1ef699eaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369d927b570ab87076cac9e54716473a0856899d0f4697b4b30d07a5181c4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ddcef18a4ca3491877f3ed3e14e43e7a478afef84fe6d9614c5dfd52c25864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb26ea922d538bc6c5cff6f8d0474cefe723ab38afb6a734a1f412d7591355ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3920ebe7d3eb6b372d24be3b36b16b12c48ea827aae7cf0d00fd707b6681d857\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW1014 13:15:05.624087 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 13:15:05.624866 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 13:15:05.626194 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3329203933/tls.crt::/tmp/serving-cert-3329203933/tls.key\\\\\\\"\\\\nI1014 13:15:06.092172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 13:15:06.112645 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 13:15:06.112671 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 13:15:06.112690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 13:15:06.112696 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 13:15:06.126751 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 13:15:06.126778 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126783 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 13:15:06.126790 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 13:15:06.126793 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 13:15:06.126795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 13:15:06.127007 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 13:15:06.128605 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360cb71d40b47c350378789affa9d40d1097c487f19759ceac7dd653211dc523\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:35Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.768447 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.768527 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.768544 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.768583 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.768599 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:35Z","lastTransitionTime":"2025-10-14T13:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.782579 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:35Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.797441 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cxcmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8cj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8cj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cxcmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:35Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.811969 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55e8261f-c97f-4d46-b31f-6ef737e6c4c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92d8ba988206535f4ad53245b92484c9cfaf86d2a768d171b6a56a3d71ce2855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c3fa61126ab1de3e82ff31cd2d54b72d6a4355db0dbe16a1300f5349ec032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76dc71de9a8769dc0878cc796ad53a4355d128d9d3011601d1e10127a390b87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968caf419d5ddbb9213c7325a25516bdec5108f010a0e3457faa716b6f6f8955\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:35Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.827172 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:35Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.840133 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c335f15dced5e4240b10bf8cf93f05aaa7954860de085d7d25e465ac49fcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ada4aa976fb1be52ba1daea092bc4a477f4fa57a4175db09718886f670cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:35Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.855447 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbgwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4ed727c-f4d1-47cd-a218-e22803eb1750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e839b3fcf6bd42a4f3c8167cfc3a8ce3d333f519ef8fa12daa8bfb7f795e6bb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gn97f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbgwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:35Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.870647 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.870705 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.870713 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.870726 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.870735 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:35Z","lastTransitionTime":"2025-10-14T13:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.871491 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf13954e4938ed6d4493d155d0d8d3ddd585e161184d1869879f8464fb537bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:35Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.887715 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0481199b2dfab1f5e16e57b9e99c70988b1fcdf15877173722a436a0c7cd93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:35Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.903611 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f27c973f-d487-4b38-8921-f9c96635219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f246f81af5a032ce9db26f4f99c6cbcc43eb7ff2674a8de64d385efea9ae280d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l7nwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:35Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.913678 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3dd5bed9576c9be576b8118eff0593b0eaf3d8fd8614dfbf566906a78f249e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmpdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b75fa898729a1cbd704de8ab24f7d7bb470e00eb9f3ee51e23d86222961b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmpdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jbldr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:35Z is after 2025-08-24T17:21:41Z" Oct 14 
13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.921157 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.921191 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.921211 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 14 13:15:35 crc kubenswrapper[4725]: E1014 13:15:35.921297 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 14 13:15:35 crc kubenswrapper[4725]: E1014 13:15:35.921377 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 14 13:15:35 crc kubenswrapper[4725]: E1014 13:15:35.921509 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.926003 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:35Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.936136 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9mfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"768af815-9351-4b60-a9de-9f188049acd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d9cb10af1479948d9b616c0c054bea341c96c4964809f1b208472617a376c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8zz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9mfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:35Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.950257 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7400c43922418faea2f15eab69dfcdb6d01fd14117423a0ce3de487078acd746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cc50d2bb6d8b894c9524b2557eda22f857a08c286a90a95ab4be265b524020c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9hh9\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:35Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.970998 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d54d71-93d1-4cde-940e-a371117f59bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics
-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2b1e62f60db38faadf074db3ea102158cffc992ec8b64a7c79d48b92388fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe2b1e62f60db38faadf074db3ea102158cffc992ec8b64a7c79d48b92388fea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T13:15:34Z\\\",\\\"message\\\":\\\".go:409] Going to retry *v1.Pod resource setup for 1 objects: [openshift-multus/network-metrics-daemon-cxcmw]\\\\nI1014 13:15:33.963152 6443 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1014 13:15:33.963169 6443 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-cxcmw before timer (time: 2025-10-14 13:15:34.927380579 +0000 UTC m=+1.581869588): skip\\\\nI1014 13:15:33.963178 6443 factory.go:656] Stopping watch factory\\\\nI1014 13:15:33.963184 6443 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 41.871µs)\\\\nI1014 13:15:33.963004 6443 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name pod/openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b9mz9. OVN-Kubernetes controller took 3.408e-05 seconds. 
No OVN measurement.\\\\nI1014 13:15:33.963197 6443 ovnkube.go:599] Stopped ovnkube\\\\nI1014 13:15:33.963196 6443 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1014 13:15:33.963210 6443 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1014 13:15:33.963242 6443 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1014 13:15:33.963343 6443 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-9v9qj_openshift-ovn-kubernetes(38d54d71-93d1-4cde-940e-a371117f59bd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9v9qj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:35Z is after 2025-08-24T17:21:41Z"
Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.973493 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.973540 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.973550 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.973570 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.973587 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:35Z","lastTransitionTime":"2025-10-14T13:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 13:15:35 crc kubenswrapper[4725]: I1014 13:15:35.985727 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8fjcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db996ea5-a3bf-4db3-a0df-fdd640228c83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1932e79f25b11a7dd11ee7d1b43139b7b370aac134f24741288e73006992f497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8fjcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:35Z is after 2025-08-24T17:21:41Z"
Oct 14 13:15:36 crc kubenswrapper[4725]: I1014 13:15:36.076944 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:15:36 crc kubenswrapper[4725]: I1014 13:15:36.077005 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:15:36 crc kubenswrapper[4725]: I1014 13:15:36.077016 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:15:36 crc kubenswrapper[4725]: I1014 13:15:36.077039 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 13:15:36 crc kubenswrapper[4725]: I1014 13:15:36.077051 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:36Z","lastTransitionTime":"2025-10-14T13:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 13:15:36 crc kubenswrapper[4725]: I1014 13:15:36.179500 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:15:36 crc kubenswrapper[4725]: I1014 13:15:36.179547 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:15:36 crc kubenswrapper[4725]: I1014 13:15:36.179559 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:15:36 crc kubenswrapper[4725]: I1014 13:15:36.179577 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 13:15:36 crc kubenswrapper[4725]: I1014 13:15:36.179590 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:36Z","lastTransitionTime":"2025-10-14T13:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 13:15:36 crc kubenswrapper[4725]: I1014 13:15:36.283003 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:15:36 crc kubenswrapper[4725]: I1014 13:15:36.283075 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:15:36 crc kubenswrapper[4725]: I1014 13:15:36.283093 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:15:36 crc kubenswrapper[4725]: I1014 13:15:36.283117 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 13:15:36 crc kubenswrapper[4725]: I1014 13:15:36.283134 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:36Z","lastTransitionTime":"2025-10-14T13:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 13:15:36 crc kubenswrapper[4725]: I1014 13:15:36.387174 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:15:36 crc kubenswrapper[4725]: I1014 13:15:36.387246 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:15:36 crc kubenswrapper[4725]: I1014 13:15:36.387271 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:15:36 crc kubenswrapper[4725]: I1014 13:15:36.387301 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 13:15:36 crc kubenswrapper[4725]: I1014 13:15:36.387323 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:36Z","lastTransitionTime":"2025-10-14T13:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 13:15:36 crc kubenswrapper[4725]: I1014 13:15:36.489727 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:15:36 crc kubenswrapper[4725]: I1014 13:15:36.489794 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:15:36 crc kubenswrapper[4725]: I1014 13:15:36.489813 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:15:36 crc kubenswrapper[4725]: I1014 13:15:36.489842 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 13:15:36 crc kubenswrapper[4725]: I1014 13:15:36.489861 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:36Z","lastTransitionTime":"2025-10-14T13:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 13:15:36 crc kubenswrapper[4725]: I1014 13:15:36.592777 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:15:36 crc kubenswrapper[4725]: I1014 13:15:36.592827 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:15:36 crc kubenswrapper[4725]: I1014 13:15:36.592840 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:15:36 crc kubenswrapper[4725]: I1014 13:15:36.592857 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 13:15:36 crc kubenswrapper[4725]: I1014 13:15:36.592871 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:36Z","lastTransitionTime":"2025-10-14T13:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 13:15:36 crc kubenswrapper[4725]: I1014 13:15:36.695668 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:15:36 crc kubenswrapper[4725]: I1014 13:15:36.695747 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:15:36 crc kubenswrapper[4725]: I1014 13:15:36.695770 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:15:36 crc kubenswrapper[4725]: I1014 13:15:36.695801 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 13:15:36 crc kubenswrapper[4725]: I1014 13:15:36.695826 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:36Z","lastTransitionTime":"2025-10-14T13:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 13:15:36 crc kubenswrapper[4725]: I1014 13:15:36.798690 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:15:36 crc kubenswrapper[4725]: I1014 13:15:36.798751 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:15:36 crc kubenswrapper[4725]: I1014 13:15:36.798770 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:15:36 crc kubenswrapper[4725]: I1014 13:15:36.798790 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 13:15:36 crc kubenswrapper[4725]: I1014 13:15:36.798806 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:36Z","lastTransitionTime":"2025-10-14T13:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 13:15:36 crc kubenswrapper[4725]: I1014 13:15:36.901628 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:15:36 crc kubenswrapper[4725]: I1014 13:15:36.901718 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:15:36 crc kubenswrapper[4725]: I1014 13:15:36.901737 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:15:36 crc kubenswrapper[4725]: I1014 13:15:36.901762 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 13:15:36 crc kubenswrapper[4725]: I1014 13:15:36.901778 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:36Z","lastTransitionTime":"2025-10-14T13:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 13:15:36 crc kubenswrapper[4725]: I1014 13:15:36.920397 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw"
Oct 14 13:15:36 crc kubenswrapper[4725]: E1014 13:15:36.920625 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b"
Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.004341 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.004425 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.004442 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.004489 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.004504 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:37Z","lastTransitionTime":"2025-10-14T13:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.107538 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.107621 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.107635 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.107659 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.107672 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:37Z","lastTransitionTime":"2025-10-14T13:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.215931 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.215980 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.215991 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.216008 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.216018 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:37Z","lastTransitionTime":"2025-10-14T13:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.319117 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.319188 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.319206 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.319234 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.319257 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:37Z","lastTransitionTime":"2025-10-14T13:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.422281 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.422347 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.422360 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.422379 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.422392 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:37Z","lastTransitionTime":"2025-10-14T13:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.526066 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.526107 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.526119 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.526136 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.526147 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:37Z","lastTransitionTime":"2025-10-14T13:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.628591 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.628647 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.628657 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.628682 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.628695 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:37Z","lastTransitionTime":"2025-10-14T13:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.731318 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.731412 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.731446 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.731520 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.731542 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:37Z","lastTransitionTime":"2025-10-14T13:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.834493 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.834530 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.834539 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.834554 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.834563 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:37Z","lastTransitionTime":"2025-10-14T13:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.880677 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.880795 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:15:37 crc kubenswrapper[4725]: E1014 13:15:37.880886 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:16:09.880854515 +0000 UTC m=+86.729289354 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:15:37 crc kubenswrapper[4725]: E1014 13:15:37.880940 4725 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.880950 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.880998 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.881052 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:15:37 crc kubenswrapper[4725]: E1014 13:15:37.881188 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 13:15:37 crc kubenswrapper[4725]: E1014 13:15:37.881235 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 13:15:37 crc kubenswrapper[4725]: E1014 13:15:37.881229 4725 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 13:15:37 crc kubenswrapper[4725]: E1014 13:15:37.881198 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 13:16:09.881179444 +0000 UTC m=+86.729614293 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 14 13:15:37 crc kubenswrapper[4725]: E1014 13:15:37.881281 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 14 13:15:37 crc kubenswrapper[4725]: E1014 13:15:37.881328 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 14 13:15:37 crc kubenswrapper[4725]: E1014 13:15:37.881252 4725 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 13:15:37 crc kubenswrapper[4725]: E1014 13:15:37.881340 4725 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 13:15:37 crc kubenswrapper[4725]: E1014 13:15:37.881315 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 13:16:09.881292377 +0000 UTC m=+86.729727216 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 14 13:15:37 crc kubenswrapper[4725]: E1014 13:15:37.881410 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-14 13:16:09.88139267 +0000 UTC m=+86.729827479 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 13:15:37 crc kubenswrapper[4725]: E1014 13:15:37.881435 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-14 13:16:09.88142819 +0000 UTC m=+86.729862990 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.921121 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.921214 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.921242 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:15:37 crc kubenswrapper[4725]: E1014 13:15:37.921354 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 13:15:37 crc kubenswrapper[4725]: E1014 13:15:37.921589 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 13:15:37 crc kubenswrapper[4725]: E1014 13:15:37.921754 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.942110 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.942386 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.942408 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.942442 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:37 crc kubenswrapper[4725]: I1014 13:15:37.942481 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:37Z","lastTransitionTime":"2025-10-14T13:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.046170 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.046232 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.046246 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.046263 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.046277 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:38Z","lastTransitionTime":"2025-10-14T13:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.148031 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.148066 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.148074 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.148086 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.148094 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:38Z","lastTransitionTime":"2025-10-14T13:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.251522 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.251576 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.251588 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.251606 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.251619 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:38Z","lastTransitionTime":"2025-10-14T13:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.355202 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.355293 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.355314 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.355340 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.355359 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:38Z","lastTransitionTime":"2025-10-14T13:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.458639 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.458697 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.458706 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.458727 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.458737 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:38Z","lastTransitionTime":"2025-10-14T13:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.561185 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.561221 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.561230 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.561243 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.561252 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:38Z","lastTransitionTime":"2025-10-14T13:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.665713 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.665770 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.665801 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.665828 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.665847 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:38Z","lastTransitionTime":"2025-10-14T13:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.769806 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.769878 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.769901 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.769929 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.769953 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:38Z","lastTransitionTime":"2025-10-14T13:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.831851 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.847244 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8fjcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db996ea5-a3bf-4db3-a0df-fdd640228c83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1932e79f25b11a7dd11ee7d1b43139b7b370aac134f24741288e73006992f497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs
\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8fjcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:38Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.859391 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3dd5bed9576c9be576b8118eff0593b0eaf3d8fd8614dfbf566906a78f249e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmpdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b75fa898729a1cbd704de8ab24f7d7bb470e00eb9f3ee51e23d86222961b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmpdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jbldr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:38Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.872686 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.872732 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.872747 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.872767 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.872791 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:38Z","lastTransitionTime":"2025-10-14T13:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.878531 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:38Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.894328 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9mfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"768af815-9351-4b60-a9de-9f188049acd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d9cb10af1479948d9b616c0c054bea341c96c4964809f1b208472617a376c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8zz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9mfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:38Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.908527 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7400c43922418faea2f15eab69dfcdb6d01fd14117423a0ce3de487078acd746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cc50d2bb6d8b894c9524b2557eda22f857a08c286a90a95ab4be265b524020c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9hh9\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:38Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.920644 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:15:38 crc kubenswrapper[4725]: E1014 13:15:38.920855 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.931289 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d54d71-93d1-4cde-940e-a371117f59bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2b1e62f60db38faadf074db3ea102158cffc99
2ec8b64a7c79d48b92388fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe2b1e62f60db38faadf074db3ea102158cffc992ec8b64a7c79d48b92388fea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T13:15:34Z\\\",\\\"message\\\":\\\".go:409] Going to retry *v1.Pod resource setup for 1 objects: [openshift-multus/network-metrics-daemon-cxcmw]\\\\nI1014 13:15:33.963152 6443 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1014 13:15:33.963169 6443 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-cxcmw before timer (time: 2025-10-14 13:15:34.927380579 +0000 UTC m=+1.581869588): skip\\\\nI1014 13:15:33.963178 6443 factory.go:656] Stopping watch factory\\\\nI1014 13:15:33.963184 6443 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 41.871µs)\\\\nI1014 13:15:33.963004 6443 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name pod/openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b9mz9. OVN-Kubernetes controller took 3.408e-05 seconds. No OVN measurement.\\\\nI1014 13:15:33.963197 6443 ovnkube.go:599] Stopped ovnkube\\\\nI1014 13:15:33.963196 6443 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1014 13:15:33.963210 6443 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1014 13:15:33.963242 6443 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1014 13:15:33.963343 6443 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9v9qj_openshift-ovn-kubernetes(38d54d71-93d1-4cde-940e-a371117f59bd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9v9qj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:38Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.951116 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60823bb2-3f65-441b-968e-bee1ef699eaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369d927b570ab87076cac9e54716473a0856899d0f4697b4b30d07a5181c4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ddcef18a4ca3491877f3ed3e14e43e7a478afef84fe6d9614c5dfd52c25864\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb26ea922d538bc6c5cff6f8d0474cefe723ab38afb6a734a1f412d7591355ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3920ebe7d3eb6b372d24be3b36b16b12c48ea827aae7cf0d00fd707b6681d857\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW1014 13:15:05.624087 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 13:15:05.624866 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 13:15:05.626194 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3329203933/tls.crt::/tmp/serving-cert-3329203933/tls.key\\\\\\\"\\\\nI1014 13:15:06.092172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 13:15:06.112645 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 13:15:06.112671 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 13:15:06.112690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 13:15:06.112696 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 13:15:06.126751 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 13:15:06.126778 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126783 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 13:15:06.126790 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 13:15:06.126793 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 13:15:06.126795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 13:15:06.127007 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 13:15:06.128605 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360cb71d40b47c350378789affa9d40d1097c487f19759ceac7dd653211dc523\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:38Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.969106 4725 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:38Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.975214 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.975267 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.975281 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.975301 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.975315 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:38Z","lastTransitionTime":"2025-10-14T13:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:38 crc kubenswrapper[4725]: I1014 13:15:38.988165 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbgwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4ed727c-f4d1-47cd-a218-e22803eb1750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e839b3fcf6bd42a4f3c8167cfc3a8ce3d333f519ef8fa12daa8bfb7f795e6bb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gn97f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbgwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:38Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.003680 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cxcmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8cj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8cj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cxcmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:39Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.022525 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55e8261f-c97f-4d46-b31f-6ef737e6c4c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92d8ba988206535f4ad53245b92484c9cfaf86d2a768d171b6a56a3d71ce2855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c3fa61126ab1de3e82ff31cd2d54b72d6a4355db0dbe16a1300f5349ec032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76dc71de9a8769dc0878cc796ad53a4355d128d9d3011601d1e10127a390b87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968caf419d5ddbb9213c7325a25516bdec5108f010a0e3457faa716b6f6f8955\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:39Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.039518 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:39Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.058084 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c335f15dced5e4240b10bf8cf93f05aaa7954860de085d7d25e465ac49fcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ada4aa976fb1be52ba1daea092bc4a477f4fa57a4175db09718886f670cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:39Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.073372 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf13954e4938ed6d4493d155d0d8d3ddd585e161184d1869879f8464fb537bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:39Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.077948 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.077987 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.077999 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:39 crc 
kubenswrapper[4725]: I1014 13:15:39.078017 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.078030 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:39Z","lastTransitionTime":"2025-10-14T13:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.088274 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0481199b2dfab1f5e16e57b9e99c70988b1fcdf15877173722a436a0c7cd93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:39Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.107100 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f27c973f-d487-4b38-8921-f9c96635219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f246f81af5a032ce9db26f4f99c6cbcc43eb7ff2674a8de64d385efea9ae280d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l7nwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:39Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.181996 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.182070 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:39 crc 
kubenswrapper[4725]: I1014 13:15:39.182087 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.182112 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.182131 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:39Z","lastTransitionTime":"2025-10-14T13:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.285331 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.285369 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.285380 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.285396 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.285409 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:39Z","lastTransitionTime":"2025-10-14T13:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.387990 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.388285 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.388370 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.388445 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.388555 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:39Z","lastTransitionTime":"2025-10-14T13:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.490848 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.490916 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.490931 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.490947 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.490959 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:39Z","lastTransitionTime":"2025-10-14T13:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.594222 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.594265 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.594275 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.594292 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.594304 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:39Z","lastTransitionTime":"2025-10-14T13:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.697929 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.698035 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.698052 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.698076 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.698092 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:39Z","lastTransitionTime":"2025-10-14T13:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.801309 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.801386 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.801407 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.801434 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.801483 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:39Z","lastTransitionTime":"2025-10-14T13:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.905301 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.905394 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.905418 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.905487 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.905512 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:39Z","lastTransitionTime":"2025-10-14T13:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.920856 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.920921 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:15:39 crc kubenswrapper[4725]: E1014 13:15:39.920985 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 13:15:39 crc kubenswrapper[4725]: I1014 13:15:39.920998 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:15:39 crc kubenswrapper[4725]: E1014 13:15:39.921079 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 13:15:39 crc kubenswrapper[4725]: E1014 13:15:39.921184 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.007721 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.007767 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.007775 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.007790 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.007799 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:40Z","lastTransitionTime":"2025-10-14T13:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.053719 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.066252 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.068442 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55e8261f-c97f-4d46-b31f-6ef737e6c4c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92d8ba988206535f4ad53245b92484c9cfaf86d2a768d171b6a56a3d71ce2855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c3fa61126ab1de3e82ff31cd2d54b72d6a4355db0dbe16a1300f5349ec032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76dc71de9a8769dc0878cc796ad53a4355d128d9d3011601d1e10127a390b87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92e
daf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968caf419d5ddbb9213c7325a25516bdec5108f010a0e3457faa716b6f6f8955\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:40Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.088224 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:40Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.103565 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c335f15dced5e4240b10bf8cf93f05aaa7954860de085d7d25e465ac49fcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ada4aa976fb1be52ba1daea092bc4a477f4fa57a4175db09718886f670cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:40Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.110183 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.110234 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.110247 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.110265 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.110277 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:40Z","lastTransitionTime":"2025-10-14T13:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.119502 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbgwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4ed727c-f4d1-47cd-a218-e22803eb1750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e839b3fcf6bd42a4f3c8167cfc3a8ce3d333f519ef8fa12daa8bfb7f795e6bb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gn97f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbgwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:40Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.132176 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cxcmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8cj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8cj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cxcmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:40Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.150201 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf13954e4938ed6d4493d155d0d8d3ddd585e161184d1869879f8464fb537bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:40Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.166545 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0481199b2dfab1f5e16e57b9e99c70988b1fcdf15877173722a436a0c7cd93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:40Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.183066 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f27c973f-d487-4b38-8921-f9c96635219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f246f81af5a032ce9db26f4f99c6cbcc43eb7ff2674a8de64d385efea9ae280d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l7nwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:40Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.198203 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:40Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.213211 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.213254 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.213270 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.213289 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.213303 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:40Z","lastTransitionTime":"2025-10-14T13:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.213897 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9mfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"768af815-9351-4b60-a9de-9f188049acd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d9cb10af1479948d9b616c0c054bea341c96c4964809f1b208472617a376c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8zz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9mfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:40Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.232078 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7400c43922418faea2f15eab69dfcdb6d01fd14117423a0ce3de487078acd746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cc50d2bb6d8b894c9524b2557eda22f857a08c286a90a95ab4be265b524020c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9hh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:40Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.254886 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d54d71-93d1-4cde-940e-a371117f59bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2b1e62f60db38faadf074db3ea102158cffc992ec8b64a7c79d48b92388fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe2b1e62f60db38faadf074db3ea102158cffc992ec8b64a7c79d48b92388fea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T13:15:34Z\\\",\\\"message\\\":\\\".go:409] Going to retry *v1.Pod resource setup for 1 objects: [openshift-multus/network-metrics-daemon-cxcmw]\\\\nI1014 13:15:33.963152 6443 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1014 13:15:33.963169 6443 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-cxcmw before timer (time: 2025-10-14 13:15:34.927380579 +0000 UTC m=+1.581869588): skip\\\\nI1014 13:15:33.963178 6443 factory.go:656] Stopping watch factory\\\\nI1014 13:15:33.963184 6443 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 41.871µs)\\\\nI1014 13:15:33.963004 6443 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name pod/openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b9mz9. OVN-Kubernetes controller took 3.408e-05 seconds. 
No OVN measurement.\\\\nI1014 13:15:33.963197 6443 ovnkube.go:599] Stopped ovnkube\\\\nI1014 13:15:33.963196 6443 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1014 13:15:33.963210 6443 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1014 13:15:33.963242 6443 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1014 13:15:33.963343 6443 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-9v9qj_openshift-ovn-kubernetes(38d54d71-93d1-4cde-940e-a371117f59bd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9v9qj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:40Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.267522 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8fjcf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db996ea5-a3bf-4db3-a0df-fdd640228c83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1932e79f25b11a7dd11ee7d1b43139b7b370aac134f24741288e73006992f497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8fjcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:40Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.282346 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3dd5bed9576c9be576b8118eff0593b0eaf3d8fd8614dfbf566906a78f249e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmpdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b75fa898729a1cbd704de8ab24f7d7bb470e00eb9f3ee51e23d86222961b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmpdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jbldr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:40Z is after 2025-08-24T17:21:41Z" Oct 14 
13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.301245 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60823bb2-3f65-441b-968e-bee1ef699eaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369d927b570ab87076cac9e54716473a0856899d0f4697b4b30d07a5181c4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ddcef18a4ca3491877f3ed3e14e43e7a478afef84fe6d9614c5dfd52c25864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb26ea922d538bc6c5cff6f8d0474cefe723ab38afb6a734a1f412d7591355ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3920ebe7d3eb6b372d24be3b36b16b12c48ea827aae7cf0d00fd707b6681d857\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW1014 13:15:05.624087 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 13:15:05.624866 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 13:15:05.626194 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3329203933/tls.crt::/tmp/serving-cert-3329203933/tls.key\\\\\\\"\\\\nI1014 13:15:06.092172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 13:15:06.112645 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 13:15:06.112671 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 13:15:06.112690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 13:15:06.112696 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 13:15:06.126751 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 13:15:06.126778 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126783 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 13:15:06.126790 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 13:15:06.126793 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 13:15:06.126795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 13:15:06.127007 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 13:15:06.128605 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360cb71d40b47c350378789affa9d40d1097c487f19759ceac7dd653211dc523\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:40Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.315498 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:40Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.315743 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.315786 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.315796 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.315814 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.315826 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:40Z","lastTransitionTime":"2025-10-14T13:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.419524 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.419573 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.419586 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.419609 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.419624 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:40Z","lastTransitionTime":"2025-10-14T13:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.523513 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.523566 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.523582 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.523626 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.523642 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:40Z","lastTransitionTime":"2025-10-14T13:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.626831 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.626907 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.626930 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.626960 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.626985 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:40Z","lastTransitionTime":"2025-10-14T13:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.730321 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.730361 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.730378 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.730398 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.730411 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:40Z","lastTransitionTime":"2025-10-14T13:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.833799 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.833867 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.833881 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.833904 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.833921 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:40Z","lastTransitionTime":"2025-10-14T13:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.920597 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:15:40 crc kubenswrapper[4725]: E1014 13:15:40.920799 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.937015 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.937063 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.937073 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.937086 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:40 crc kubenswrapper[4725]: I1014 13:15:40.937097 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:40Z","lastTransitionTime":"2025-10-14T13:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.039347 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.039410 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.039421 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.039442 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.039480 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:41Z","lastTransitionTime":"2025-10-14T13:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.142860 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.142927 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.142947 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.142973 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.142986 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:41Z","lastTransitionTime":"2025-10-14T13:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.246332 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.246398 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.246487 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.246516 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.246538 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:41Z","lastTransitionTime":"2025-10-14T13:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.349823 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.349892 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.349911 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.349938 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.349955 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:41Z","lastTransitionTime":"2025-10-14T13:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.452736 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.452784 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.452798 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.452817 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.452829 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:41Z","lastTransitionTime":"2025-10-14T13:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.556048 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.556312 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.556431 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.556660 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.556757 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:41Z","lastTransitionTime":"2025-10-14T13:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.660002 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.660049 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.660060 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.660076 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.660085 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:41Z","lastTransitionTime":"2025-10-14T13:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.763332 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.763409 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.763431 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.763503 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.763528 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:41Z","lastTransitionTime":"2025-10-14T13:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.866943 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.867025 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.867043 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.867069 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.867086 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:41Z","lastTransitionTime":"2025-10-14T13:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.920937 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.920958 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.920981 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:15:41 crc kubenswrapper[4725]: E1014 13:15:41.921156 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 13:15:41 crc kubenswrapper[4725]: E1014 13:15:41.921266 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 13:15:41 crc kubenswrapper[4725]: E1014 13:15:41.921337 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.970228 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.970300 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.970308 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.970329 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:41 crc kubenswrapper[4725]: I1014 13:15:41.970342 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:41Z","lastTransitionTime":"2025-10-14T13:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.073790 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.073840 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.073853 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.073874 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.073891 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:42Z","lastTransitionTime":"2025-10-14T13:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.177598 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.178347 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.178385 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.178405 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.178418 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:42Z","lastTransitionTime":"2025-10-14T13:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.281490 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.281543 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.281554 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.281575 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.281586 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:42Z","lastTransitionTime":"2025-10-14T13:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.385158 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.385221 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.385241 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.385272 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.385294 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:42Z","lastTransitionTime":"2025-10-14T13:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.396149 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.396199 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.396218 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.396241 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.396259 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:42Z","lastTransitionTime":"2025-10-14T13:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:42 crc kubenswrapper[4725]: E1014 13:15:42.418951 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d8471f97-cd84-4e08-baca-4ac91f02188a\\\",\\\"systemUUID\\\":\\\"c40d671b-403d-4187-8320-a34d153a3ed0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:42Z is after 2025-08-24T17:21:41Z"
Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.423982 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.424222 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
event="NodeHasNoDiskPressure" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.424263 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.424296 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.424317 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:42Z","lastTransitionTime":"2025-10-14T13:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:42 crc kubenswrapper[4725]: E1014 13:15:42.444322 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d8471f97-cd84-4e08-baca-4ac91f02188a\\\",\\\"systemUUID\\\":\\\"c40d671b-403d-4187-8320-a34d153a3ed0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:42Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.449577 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.449645 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.449670 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.449709 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.449733 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:42Z","lastTransitionTime":"2025-10-14T13:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:42 crc kubenswrapper[4725]: E1014 13:15:42.467026 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d8471f97-cd84-4e08-baca-4ac91f02188a\\\",\\\"systemUUID\\\":\\\"c40d671b-403d-4187-8320-a34d153a3ed0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:42Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.472141 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.472200 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.472221 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.472249 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.472269 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:42Z","lastTransitionTime":"2025-10-14T13:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:42 crc kubenswrapper[4725]: E1014 13:15:42.490815 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d8471f97-cd84-4e08-baca-4ac91f02188a\\\",\\\"systemUUID\\\":\\\"c40d671b-403d-4187-8320-a34d153a3ed0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:42Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.496939 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.496996 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.497013 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.497038 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.497055 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:42Z","lastTransitionTime":"2025-10-14T13:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:42 crc kubenswrapper[4725]: E1014 13:15:42.511625 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d8471f97-cd84-4e08-baca-4ac91f02188a\\\",\\\"systemUUID\\\":\\\"c40d671b-403d-4187-8320-a34d153a3ed0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:42Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:42 crc kubenswrapper[4725]: E1014 13:15:42.511786 4725 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.513668 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.513714 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.513731 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.513756 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.513777 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:42Z","lastTransitionTime":"2025-10-14T13:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.617370 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.617426 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.617438 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.617475 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.617491 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:42Z","lastTransitionTime":"2025-10-14T13:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.720293 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.720365 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.720409 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.720436 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.720505 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:42Z","lastTransitionTime":"2025-10-14T13:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.824100 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.824177 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.824194 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.824219 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.824236 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:42Z","lastTransitionTime":"2025-10-14T13:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.920788 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:15:42 crc kubenswrapper[4725]: E1014 13:15:42.920986 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.926816 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.926872 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.926882 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.926902 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:42 crc kubenswrapper[4725]: I1014 13:15:42.926919 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:42Z","lastTransitionTime":"2025-10-14T13:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.030807 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.030867 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.030879 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.030901 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.030916 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:43Z","lastTransitionTime":"2025-10-14T13:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.134166 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.134211 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.134222 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.134238 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.134249 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:43Z","lastTransitionTime":"2025-10-14T13:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.236476 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.236506 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.236516 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.236531 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.236543 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:43Z","lastTransitionTime":"2025-10-14T13:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.339674 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.339765 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.339796 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.339827 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.339850 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:43Z","lastTransitionTime":"2025-10-14T13:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.442662 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.443149 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.443345 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.443386 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.443626 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:43Z","lastTransitionTime":"2025-10-14T13:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.547325 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.547396 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.547414 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.547489 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.547505 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:43Z","lastTransitionTime":"2025-10-14T13:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.650930 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.651425 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.651571 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.651677 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.651767 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:43Z","lastTransitionTime":"2025-10-14T13:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.754224 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.754628 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.754891 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.755150 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.755377 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:43Z","lastTransitionTime":"2025-10-14T13:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.858868 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.859246 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.859388 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.859583 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.859757 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:43Z","lastTransitionTime":"2025-10-14T13:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.920859 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:15:43 crc kubenswrapper[4725]: E1014 13:15:43.920990 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.920863 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.921057 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:15:43 crc kubenswrapper[4725]: E1014 13:15:43.921151 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 13:15:43 crc kubenswrapper[4725]: E1014 13:15:43.921222 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.938864 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9mfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"768af815-9351-4b60-a9de-9f188049acd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d9cb10af1479948d9b616c0c054bea341c96c4964809f1b208472617a376c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8zz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9mfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:43Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.954673 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7400c43922418faea2f15eab69dfcdb6d01fd14117423a0ce3de487078acd746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cc50d2bb6d8b894c9524b2557eda22f857a08c286a90a95ab4be265b524020c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9hh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:43Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.962565 4725 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.962606 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.962623 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.962639 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.962671 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:43Z","lastTransitionTime":"2025-10-14T13:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:43 crc kubenswrapper[4725]: I1014 13:15:43.985926 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d54d71-93d1-4cde-940e-a371117f59bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2b1e62f60db38faadf074db3ea102158cffc99
2ec8b64a7c79d48b92388fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe2b1e62f60db38faadf074db3ea102158cffc992ec8b64a7c79d48b92388fea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T13:15:34Z\\\",\\\"message\\\":\\\".go:409] Going to retry *v1.Pod resource setup for 1 objects: [openshift-multus/network-metrics-daemon-cxcmw]\\\\nI1014 13:15:33.963152 6443 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1014 13:15:33.963169 6443 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-cxcmw before timer (time: 2025-10-14 13:15:34.927380579 +0000 UTC m=+1.581869588): skip\\\\nI1014 13:15:33.963178 6443 factory.go:656] Stopping watch factory\\\\nI1014 13:15:33.963184 6443 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 41.871µs)\\\\nI1014 13:15:33.963004 6443 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name pod/openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b9mz9. OVN-Kubernetes controller took 3.408e-05 seconds. No OVN measurement.\\\\nI1014 13:15:33.963197 6443 ovnkube.go:599] Stopped ovnkube\\\\nI1014 13:15:33.963196 6443 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1014 13:15:33.963210 6443 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1014 13:15:33.963242 6443 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1014 13:15:33.963343 6443 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9v9qj_openshift-ovn-kubernetes(38d54d71-93d1-4cde-940e-a371117f59bd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9v9qj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:43Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.000849 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8fjcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db996ea5-a3bf-4db3-a0df-fdd640228c83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1932e79f25b11a7dd11ee7d1b43139b7b370aac134f24741288e73006992f497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8fjcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:43Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.016754 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3dd5bed9576c9be576b8118eff0593b0eaf3d8fd8614dfbf566906a78f249e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmpdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b75fa898729a1cbd704de8ab24f7d7bb470e00eb9f3ee51e23d86222961b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmpdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jbldr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:44Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.033172 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:44Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.052212 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60823bb2-3f65-441b-968e-bee1ef699eaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369d927b570ab87076cac9e54716473a0856899d0f4697b4b30d07a5181c4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ddcef18a4ca3491877f3ed3e14e43e7a478afef84fe6d9614c5dfd52c25864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb26ea922d538bc6c5cff6f8d0474cefe723ab38afb6a734a1f412d7591355ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3920ebe7d3eb6b372d24be3b36b16b12c48ea827aae7cf0d00fd707b6681d857\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW1014 13:15:05.624087 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 13:15:05.624866 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 13:15:05.626194 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3329203933/tls.crt::/tmp/serving-cert-3329203933/tls.key\\\\\\\"\\\\nI1014 13:15:06.092172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 13:15:06.112645 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 13:15:06.112671 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 13:15:06.112690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 13:15:06.112696 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 13:15:06.126751 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 13:15:06.126778 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126783 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 13:15:06.126790 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 13:15:06.126793 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 13:15:06.126795 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 13:15:06.127007 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 13:15:06.128605 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360cb71d40b47c350378789affa9d40d1097c487f19759ceac7dd653211dc523\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:44Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.064882 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.065068 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.065083 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:44 crc 
kubenswrapper[4725]: I1014 13:15:44.065104 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.065118 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:44Z","lastTransitionTime":"2025-10-14T13:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.067242 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:44Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.081357 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ca808e-edcf-41b9-99a5-f10fcbbe6e72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7db8a1777567b9c9c186da537b33dd2a68202d45022e33ec691385e647ef5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a26fe24f459eeb79dbe24b7b637389b9ba47f6880369e92a4c823d216d17b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\
\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c401885a70b38cf87c5573364a8d07a79bd451eca6a6263abb37958614e9cf8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24408f31c736057fab41e161d394fe99f6d932020a796afc376013e2164aa209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24408f31c736057fab41e161d394fe99f6d932020a796afc376013e2164aa209\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:44Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.102992 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:44Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.120141 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c335f15dced5e4240b10bf8cf93f05aaa7954860de085d7d25e465ac49fcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ada4aa976fb1be52ba1daea092bc4a477f4fa57a4175db09718886f670cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:44Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.139318 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbgwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4ed727c-f4d1-47cd-a218-e22803eb1750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e839b3fcf6bd42a4f3c8167cfc3a8ce3d333f519ef8fa12daa8bfb7f795e6bb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\"
,\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gn97f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbgwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:44Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.155194 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cxcmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8cj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8cj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cxcmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:44Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.168664 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.168736 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.168749 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.168794 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.168808 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:44Z","lastTransitionTime":"2025-10-14T13:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.171833 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55e8261f-c97f-4d46-b31f-6ef737e6c4c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92d8ba988206535f4ad53245b92484c9cfaf86d2a768d171b6a56a3d71ce2855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c3fa61126ab1de3e82ff31cd2d54b72d6a4355db0dbe16a1300f5349ec032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76dc71de9a8769dc0878cc796ad53a4355d128d9d3011601d1e10127a390b87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968caf419d5ddbb9213c7325a25516bdec5108f010a0e3457faa716b6f6f8955\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:44Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.189481 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf13954e4938ed6d4493d155d0d8d3ddd585e161184d1869879f8464fb537bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:44Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.204305 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0481199b2dfab1f5e16e57b9e99c70988b1fcdf15877173722a436a0c7cd93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:44Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.228755 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f27c973f-d487-4b38-8921-f9c96635219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f246f81af5a032ce9db26f4f99c6cbcc43eb7ff2674a8de64d385efea9ae280d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l7nwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:44Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.272408 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.272484 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:44 crc 
kubenswrapper[4725]: I1014 13:15:44.272496 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.272517 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.272531 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:44Z","lastTransitionTime":"2025-10-14T13:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.376055 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.376107 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.376121 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.376141 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.376154 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:44Z","lastTransitionTime":"2025-10-14T13:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.479629 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.479688 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.479704 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.479724 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.479739 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:44Z","lastTransitionTime":"2025-10-14T13:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.583797 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.584095 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.584127 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.584167 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.584193 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:44Z","lastTransitionTime":"2025-10-14T13:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.687672 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.687741 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.687753 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.687777 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.687793 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:44Z","lastTransitionTime":"2025-10-14T13:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.791023 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.791094 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.791109 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.791138 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.791153 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:44Z","lastTransitionTime":"2025-10-14T13:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.896392 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.896451 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.896481 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.896506 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.896516 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:44Z","lastTransitionTime":"2025-10-14T13:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:44 crc kubenswrapper[4725]: I1014 13:15:44.920708 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:15:44 crc kubenswrapper[4725]: E1014 13:15:44.921080 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b" Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.000013 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.000853 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.000924 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.001004 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.001077 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:45Z","lastTransitionTime":"2025-10-14T13:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.104478 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.104565 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.104578 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.104602 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.104618 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:45Z","lastTransitionTime":"2025-10-14T13:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.206732 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.206791 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.206810 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.206835 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.206853 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:45Z","lastTransitionTime":"2025-10-14T13:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.309799 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.310150 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.310286 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.310437 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.310689 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:45Z","lastTransitionTime":"2025-10-14T13:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.415919 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.415972 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.415988 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.416011 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.416026 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:45Z","lastTransitionTime":"2025-10-14T13:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.518519 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.518592 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.518605 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.518624 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.518636 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:45Z","lastTransitionTime":"2025-10-14T13:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.621525 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.621573 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.621581 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.621596 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.621605 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:45Z","lastTransitionTime":"2025-10-14T13:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.724498 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.724546 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.724611 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.724632 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.724645 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:45Z","lastTransitionTime":"2025-10-14T13:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.827382 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.827483 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.827505 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.827533 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.827554 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:45Z","lastTransitionTime":"2025-10-14T13:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.921117 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.921202 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.921303 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:15:45 crc kubenswrapper[4725]: E1014 13:15:45.921504 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 13:15:45 crc kubenswrapper[4725]: E1014 13:15:45.921634 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 13:15:45 crc kubenswrapper[4725]: E1014 13:15:45.921808 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.930102 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.930154 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.930167 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.930183 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:45 crc kubenswrapper[4725]: I1014 13:15:45.930193 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:45Z","lastTransitionTime":"2025-10-14T13:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.033846 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.033910 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.033922 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.033940 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.033951 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:46Z","lastTransitionTime":"2025-10-14T13:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
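Each of these "Error syncing pod" records repeats the same root cause the node condition keeps citing: /etc/kubernetes/cni/net.d/ contains no CNI network configuration yet, so sandboxes for pods that need the cluster network cannot be created. A stdlib-only Go sketch of that kind of directory check; the .conf/.conflist/.json extensions follow libcni's usual conventions, and the snippet is an illustration rather than kubelet code:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Directory named in the kubelet's NetworkPluginNotReady message.
	dir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read", dir, "-", err)
		return
	}
	found := 0
	for _, e := range entries {
		// libcni conventionally recognizes these config extensions.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("candidate CNI config:", e.Name())
			found++
		}
	}
	if found == 0 {
		fmt.Println("no CNI configuration file in", dir)
	}
}

Until the network plugin writes a config there, only host-network pods can start; everything else is skipped and retried, which is why the same pod UIDs keep reappearing.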
Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.136140 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.136188 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.136197 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.136215 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.136225 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:46Z","lastTransitionTime":"2025-10-14T13:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.239670 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.239723 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.239733 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.239752 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.239764 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:46Z","lastTransitionTime":"2025-10-14T13:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.343559 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.343646 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.343659 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.343682 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.343694 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:46Z","lastTransitionTime":"2025-10-14T13:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.446535 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.446594 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.446606 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.446631 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.446647 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:46Z","lastTransitionTime":"2025-10-14T13:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.550219 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.550278 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.550290 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.550311 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.550331 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:46Z","lastTransitionTime":"2025-10-14T13:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.653019 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.653064 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.653079 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.653101 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.653117 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:46Z","lastTransitionTime":"2025-10-14T13:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.756482 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.756969 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.757064 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.757150 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.757225 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:46Z","lastTransitionTime":"2025-10-14T13:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.860486 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.860547 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.860557 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.860717 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.860732 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:46Z","lastTransitionTime":"2025-10-14T13:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.920471 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:15:46 crc kubenswrapper[4725]: E1014 13:15:46.920720 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b" Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.921667 4725 scope.go:117] "RemoveContainer" containerID="fe2b1e62f60db38faadf074db3ea102158cffc992ec8b64a7c79d48b92388fea" Oct 14 13:15:46 crc kubenswrapper[4725]: E1014 13:15:46.921853 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-9v9qj_openshift-ovn-kubernetes(38d54d71-93d1-4cde-940e-a371117f59bd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.963893 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.963955 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.963972 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.963994 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:46 crc kubenswrapper[4725]: I1014 13:15:46.964011 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:46Z","lastTransitionTime":"2025-10-14T13:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.067102 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.067204 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.067220 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.067241 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.067257 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:47Z","lastTransitionTime":"2025-10-14T13:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
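The "back-off 20s" in the ovnkube-controller record reflects CrashLoopBackOff: the kubelet roughly doubles the restart delay after each failed start, conventionally from a 10s initial value up to a 5m ceiling (those two defaults are assumptions here, not taken from this log). A short illustrative Go sketch of that growth pattern, not kubelet source:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed CrashLoopBackOff defaults: 10s initial delay, doubled per
	// failed restart, capped at 5m. Illustrative only.
	delay := 10 * time.Second
	const maxDelay = 5 * time.Minute
	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("restart %d: back-off %s\n", restart, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

Under those assumed defaults, "back-off 20s" corresponds to the second failed start of ovnkube-controller; the RemoveContainer entry just before it is the kubelet discarding the previous failed instance (containerID fe2b1e62...) before the next attempt.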
Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.171577 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.171636 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.171650 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.171669 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.171680 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:47Z","lastTransitionTime":"2025-10-14T13:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.275709 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.275822 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.275844 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.275878 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.275898 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:47Z","lastTransitionTime":"2025-10-14T13:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.379944 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.380006 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.380028 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.380057 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.380081 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:47Z","lastTransitionTime":"2025-10-14T13:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.483644 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.483703 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.483719 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.483745 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.483762 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:47Z","lastTransitionTime":"2025-10-14T13:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.586703 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.587129 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.587208 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.587293 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.587389 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:47Z","lastTransitionTime":"2025-10-14T13:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.691469 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.691518 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.691531 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.691550 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.691562 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:47Z","lastTransitionTime":"2025-10-14T13:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.794903 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.794963 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.794983 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.795060 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.795078 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:47Z","lastTransitionTime":"2025-10-14T13:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.898711 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.898790 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.898806 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.898829 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.898843 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:47Z","lastTransitionTime":"2025-10-14T13:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.921733 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.921812 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:15:47 crc kubenswrapper[4725]: I1014 13:15:47.921874 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:15:47 crc kubenswrapper[4725]: E1014 13:15:47.922305 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 13:15:47 crc kubenswrapper[4725]: E1014 13:15:47.922435 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 13:15:47 crc kubenswrapper[4725]: E1014 13:15:47.922256 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.002964 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.003029 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.003046 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.003070 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.003086 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:48Z","lastTransitionTime":"2025-10-14T13:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.105998 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.106053 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.106065 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.106084 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.106095 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:48Z","lastTransitionTime":"2025-10-14T13:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.209568 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.209637 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.209652 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.209674 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.209686 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:48Z","lastTransitionTime":"2025-10-14T13:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.312633 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.312720 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.312732 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.312750 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.312763 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:48Z","lastTransitionTime":"2025-10-14T13:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.415930 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.415984 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.416005 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.416027 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.416038 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:48Z","lastTransitionTime":"2025-10-14T13:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.518554 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.518609 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.518624 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.518642 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.518657 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:48Z","lastTransitionTime":"2025-10-14T13:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.621677 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.621766 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.621779 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.621820 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.621833 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:48Z","lastTransitionTime":"2025-10-14T13:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.724868 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.724924 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.724936 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.724963 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.724976 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:48Z","lastTransitionTime":"2025-10-14T13:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.828715 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.828771 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.828782 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.828803 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.828815 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:48Z","lastTransitionTime":"2025-10-14T13:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.920916 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:15:48 crc kubenswrapper[4725]: E1014 13:15:48.921032 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b" Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.932315 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.932887 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.932976 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.933076 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:48 crc kubenswrapper[4725]: I1014 13:15:48.933168 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:48Z","lastTransitionTime":"2025-10-14T13:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.036721 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.036758 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.036769 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.036786 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.036800 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:49Z","lastTransitionTime":"2025-10-14T13:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.138872 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.138915 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.138928 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.138948 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.138960 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:49Z","lastTransitionTime":"2025-10-14T13:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.244560 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.244598 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.244608 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.244623 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.244634 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:49Z","lastTransitionTime":"2025-10-14T13:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.347313 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.347421 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.347502 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.347540 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.347612 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:49Z","lastTransitionTime":"2025-10-14T13:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.450776 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.450842 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.450859 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.450883 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.450903 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:49Z","lastTransitionTime":"2025-10-14T13:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.553557 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.553597 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.553607 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.553624 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.553635 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:49Z","lastTransitionTime":"2025-10-14T13:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.657993 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.658088 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.658107 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.658134 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.658153 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:49Z","lastTransitionTime":"2025-10-14T13:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.761767 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.761822 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.761835 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.761902 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.761917 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:49Z","lastTransitionTime":"2025-10-14T13:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.864583 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.864647 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.864658 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.864676 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.864690 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:49Z","lastTransitionTime":"2025-10-14T13:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.920247 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.920284 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.920390 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:15:49 crc kubenswrapper[4725]: E1014 13:15:49.920514 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 13:15:49 crc kubenswrapper[4725]: E1014 13:15:49.920608 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 13:15:49 crc kubenswrapper[4725]: E1014 13:15:49.920673 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.967397 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.967441 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.967473 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.967512 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:49 crc kubenswrapper[4725]: I1014 13:15:49.967524 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:49Z","lastTransitionTime":"2025-10-14T13:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.070729 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.070802 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.070819 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.070847 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.070864 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:50Z","lastTransitionTime":"2025-10-14T13:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.173838 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.173873 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.173882 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.173897 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.173910 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:50Z","lastTransitionTime":"2025-10-14T13:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.276277 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.276308 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.276319 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.276332 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.276342 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:50Z","lastTransitionTime":"2025-10-14T13:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.378699 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.378727 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.378746 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.378759 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.378770 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:50Z","lastTransitionTime":"2025-10-14T13:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.480664 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.480703 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.480721 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.480738 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.480750 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:50Z","lastTransitionTime":"2025-10-14T13:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.583916 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.583998 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.584007 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.584023 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.584032 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:50Z","lastTransitionTime":"2025-10-14T13:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.687093 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.687132 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.687143 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.687163 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.687175 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:50Z","lastTransitionTime":"2025-10-14T13:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.791208 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.791272 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.791289 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.791314 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.791332 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:50Z","lastTransitionTime":"2025-10-14T13:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.893802 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.893927 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.893992 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.894017 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.894078 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:50Z","lastTransitionTime":"2025-10-14T13:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.920175 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:15:50 crc kubenswrapper[4725]: E1014 13:15:50.920345 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b" Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.997589 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.997634 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.997648 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.997665 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:50 crc kubenswrapper[4725]: I1014 13:15:50.997678 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:50Z","lastTransitionTime":"2025-10-14T13:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.099886 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.099943 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.099954 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.099971 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.099981 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:51Z","lastTransitionTime":"2025-10-14T13:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.202879 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.202928 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.202940 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.202957 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.202969 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:51Z","lastTransitionTime":"2025-10-14T13:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.304837 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.304893 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.304901 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.304914 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.304922 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:51Z","lastTransitionTime":"2025-10-14T13:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.407274 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.407349 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.407362 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.407386 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.407400 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:51Z","lastTransitionTime":"2025-10-14T13:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.510263 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.510291 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.510301 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.510315 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.510325 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:51Z","lastTransitionTime":"2025-10-14T13:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.612236 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.612270 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.612281 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.612296 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.612307 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:51Z","lastTransitionTime":"2025-10-14T13:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.638687 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b-metrics-certs\") pod \"network-metrics-daemon-cxcmw\" (UID: \"c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b\") " pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:15:51 crc kubenswrapper[4725]: E1014 13:15:51.638917 4725 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 13:15:51 crc kubenswrapper[4725]: E1014 13:15:51.639045 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b-metrics-certs podName:c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b nodeName:}" failed. No retries permitted until 2025-10-14 13:16:23.639020258 +0000 UTC m=+100.487455077 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b-metrics-certs") pod "network-metrics-daemon-cxcmw" (UID: "c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.715975 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.716059 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.716076 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.716095 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.716105 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:51Z","lastTransitionTime":"2025-10-14T13:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.818876 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.818952 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.818972 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.818989 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.819000 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:51Z","lastTransitionTime":"2025-10-14T13:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.927801 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.927912 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:15:51 crc kubenswrapper[4725]: E1014 13:15:51.928053 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.927801 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:15:51 crc kubenswrapper[4725]: E1014 13:15:51.928215 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 13:15:51 crc kubenswrapper[4725]: E1014 13:15:51.928495 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.929550 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.929576 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.929584 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.929621 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:51 crc kubenswrapper[4725]: I1014 13:15:51.929632 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:51Z","lastTransitionTime":"2025-10-14T13:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.031971 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.032013 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.032049 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.032066 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.032079 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:52Z","lastTransitionTime":"2025-10-14T13:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.135562 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.135613 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.135625 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.135640 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.135651 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:52Z","lastTransitionTime":"2025-10-14T13:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.239306 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.239381 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.239395 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.239416 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.239430 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:52Z","lastTransitionTime":"2025-10-14T13:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.342212 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.342260 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.342271 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.342289 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.342302 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:52Z","lastTransitionTime":"2025-10-14T13:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.445181 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.445217 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.445228 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.445245 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.445257 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:52Z","lastTransitionTime":"2025-10-14T13:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.547766 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.547807 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.547818 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.547834 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.547846 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:52Z","lastTransitionTime":"2025-10-14T13:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.650504 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.650572 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.650596 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.650628 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.650650 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:52Z","lastTransitionTime":"2025-10-14T13:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.753407 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.753517 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.753545 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.753571 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.753589 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:52Z","lastTransitionTime":"2025-10-14T13:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.790727 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.790778 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.790795 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.790817 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.790833 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:52Z","lastTransitionTime":"2025-10-14T13:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:52 crc kubenswrapper[4725]: E1014 13:15:52.812492 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d8471f97-cd84-4e08-baca-4ac91f02188a\\\",\\\"systemUUID\\\":\\\"c40d671b-403d-4187-8320-a34d153a3ed0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:52Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.817915 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.817961 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.817980 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.818002 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.818018 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:52Z","lastTransitionTime":"2025-10-14T13:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:52 crc kubenswrapper[4725]: E1014 13:15:52.836631 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d8471f97-cd84-4e08-baca-4ac91f02188a\\\",\\\"systemUUID\\\":\\\"c40d671b-403d-4187-8320-a34d153a3ed0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:52Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.842029 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.842121 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.842145 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.842219 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.842246 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:52Z","lastTransitionTime":"2025-10-14T13:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:52 crc kubenswrapper[4725]: E1014 13:15:52.862419 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d8471f97-cd84-4e08-baca-4ac91f02188a\\\",\\\"systemUUID\\\":\\\"c40d671b-403d-4187-8320-a34d153a3ed0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:52Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.866860 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.866919 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.866970 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.866994 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.867010 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:52Z","lastTransitionTime":"2025-10-14T13:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:52 crc kubenswrapper[4725]: E1014 13:15:52.884918 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d8471f97-cd84-4e08-baca-4ac91f02188a\\\",\\\"systemUUID\\\":\\\"c40d671b-403d-4187-8320-a34d153a3ed0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:52Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.890758 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.890806 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.890821 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.890840 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.890852 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:52Z","lastTransitionTime":"2025-10-14T13:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:52 crc kubenswrapper[4725]: E1014 13:15:52.906658 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:15:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d8471f97-cd84-4e08-baca-4ac91f02188a\\\",\\\"systemUUID\\\":\\\"c40d671b-403d-4187-8320-a34d153a3ed0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:52Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:52 crc kubenswrapper[4725]: E1014 13:15:52.906820 4725 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.909123 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
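At this point the kubelet has exhausted its node-status retries: the upstream kubelet attempts the status PATCH a small fixed number of times (nodeStatusUpdateRetry, five in current sources) before emitting the "update node status exceeds retry count" error above. Note that the patch itself is well-formed; every attempt dies in the node.network-node-identity.openshift.io validating webhook, whose serving certificate on https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z while the node clock reads 2025-10-14. A minimal, hypothetical Go sketch to confirm the certificate's validity window from the node; the address and port are taken from the log, everything else is an assumption:

    // Hypothetical probe for the webhook endpoint named in the error above.
    // It dials 127.0.0.1:9743 (from the log), skips verification so the
    // expired certificate can still be inspected, and prints its validity
    // window.
    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
        "time"
    )

    func main() {
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
            InsecureSkipVerify: true, // inspection only: the cert is known to be expired
        })
        if err != nil {
            log.Fatalf("dial webhook endpoint: %v", err)
        }
        defer conn.Close()

        cert := conn.ConnectionState().PeerCertificates[0]
        fmt.Printf("subject:   %s\n", cert.Subject)
        fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
        fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
        if time.Now().After(cert.NotAfter) {
            fmt.Println("expired -> matches the kubelet x509 error")
        }
    }

On CRC/OpenShift Local this pattern is the usual symptom of resuming a bundle whose internal certificates have aged out; it is typically resolved by certificate rotation at startup rather than by anything network-related.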
event="NodeHasSufficientMemory" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.909170 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.909204 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.909222 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.909235 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:52Z","lastTransitionTime":"2025-10-14T13:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:52 crc kubenswrapper[4725]: I1014 13:15:52.920508 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:15:52 crc kubenswrapper[4725]: E1014 13:15:52.920643 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b" Oct 14 13:15:53 crc kubenswrapper[4725]: I1014 13:15:53.012922 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:53 crc kubenswrapper[4725]: I1014 13:15:53.012970 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:53 crc kubenswrapper[4725]: I1014 13:15:53.012982 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:53 crc kubenswrapper[4725]: I1014 13:15:53.012999 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:53 crc kubenswrapper[4725]: I1014 13:15:53.013011 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:53Z","lastTransitionTime":"2025-10-14T13:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[... the same five-entry heartbeat (four "Recording event message for node" entries and one "Node became not ready" condition, all identical apart from the timestamps) repeats at 13:15:53.117647, 13:15:53.220303, 13:15:53.323638, 13:15:53.426115, 13:15:53.529810, 13:15:53.632272, 13:15:53.734794 and 13:15:53.844556 ...]
Oct 14 13:15:53 crc kubenswrapper[4725]: I1014 13:15:53.920920 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 14 13:15:53 crc kubenswrapper[4725]: I1014 13:15:53.920998 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 14 13:15:53 crc kubenswrapper[4725]: E1014 13:15:53.921183 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 13:15:53 crc kubenswrapper[4725]: E1014 13:15:53.921545 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 13:15:53 crc kubenswrapper[4725]: I1014 13:15:53.921635 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:15:53 crc kubenswrapper[4725]: E1014 13:15:53.921725 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 13:15:53 crc kubenswrapper[4725]: I1014 13:15:53.937363 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:53Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:53 crc kubenswrapper[4725]: I1014 13:15:53.948421 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:53 crc kubenswrapper[4725]: I1014 13:15:53.948489 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:53 crc kubenswrapper[4725]: I1014 13:15:53.948504 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:53 crc kubenswrapper[4725]: I1014 13:15:53.948522 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:53 crc kubenswrapper[4725]: I1014 13:15:53.948557 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:53Z","lastTransitionTime":"2025-10-14T13:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:53 crc kubenswrapper[4725]: I1014 13:15:53.951407 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9mfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"768af815-9351-4b60-a9de-9f188049acd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d9cb10af1479948d9b616c0c054bea341c96c4964809f1b208472617a376c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8zz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9mfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:53Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:53 crc kubenswrapper[4725]: I1014 13:15:53.964908 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7400c43922418faea2f15eab69dfcdb6d01fd14117423a0ce3de487078acd746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cc50d2bb6d8b894c9524b2557eda22f857a08c286a90a95ab4be265b524020c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9hh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:53Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:53.988442 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d54d71-93d1-4cde-940e-a371117f59bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2b1e62f60db38faadf074db3ea102158cffc992ec8b64a7c79d48b92388fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe2b1e62f60db38faadf074db3ea102158cffc992ec8b64a7c79d48b92388fea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T13:15:34Z\\\",\\\"message\\\":\\\".go:409] Going to retry *v1.Pod resource setup for 1 objects: [openshift-multus/network-metrics-daemon-cxcmw]\\\\nI1014 13:15:33.963152 6443 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1014 13:15:33.963169 6443 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-cxcmw before timer (time: 2025-10-14 13:15:34.927380579 +0000 UTC m=+1.581869588): skip\\\\nI1014 13:15:33.963178 6443 factory.go:656] Stopping watch factory\\\\nI1014 13:15:33.963184 6443 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 41.871µs)\\\\nI1014 13:15:33.963004 6443 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name pod/openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b9mz9. OVN-Kubernetes controller took 3.408e-05 seconds. 
No OVN measurement.\\\\nI1014 13:15:33.963197 6443 ovnkube.go:599] Stopped ovnkube\\\\nI1014 13:15:33.963196 6443 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1014 13:15:33.963210 6443 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1014 13:15:33.963242 6443 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1014 13:15:33.963343 6443 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-9v9qj_openshift-ovn-kubernetes(38d54d71-93d1-4cde-940e-a371117f59bd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9v9qj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:53Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.001287 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8fjcf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db996ea5-a3bf-4db3-a0df-fdd640228c83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1932e79f25b11a7dd11ee7d1b43139b7b370aac134f24741288e73006992f497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8fjcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:53Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.015568 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3dd5bed9576c9be576b8118eff0593b0eaf3d8fd8614dfbf566906a78f249e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmpdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b75fa898729a1cbd704de8ab24f7d7bb470e00eb9f3ee51e23d86222961b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmpdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jbldr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:54Z is after 2025-08-24T17:21:41Z" Oct 14 
13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.034698 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ca808e-edcf-41b9-99a5-f10fcbbe6e72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7db8a1777567b9c9c186da537b33dd2a68202d45022e33ec691385e647ef5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a26fe24f459eeb79dbe24b7b637389b9ba47f6880369e92a4c823d216d17b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c401885a70b38cf87c5573364a8d07a79bd451eca6a6263abb37958614e9cf8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24408f31c736057fab41e161d394fe99f6d932020a796afc376013e2164aa209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24408f31c736057fab41e161d394fe99f6d932020a796afc376013e2164aa209\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:54Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.051337 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60823bb2-3f65-441b-968e-bee1ef699eaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369d927b570ab87076cac9e54716473a0856899d0f4697b4b30d07a5181c4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ddcef18a4ca3491877f3ed3e14e43e7a478afef84fe6d9614c5dfd52c25864\\\",\\\"image\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb26ea922d538bc6c5cff6f8d0474cefe723ab38afb6a734a1f412d7591355ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3920ebe7d3eb6b372d24be3b36b16b12c48ea827aae7cf0d00fd707b6681d857\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW1014 13:15:05.624087 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 13:15:05.624866 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 13:15:05.626194 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3329203933/tls.crt::/tmp/serving-cert-3329203933/tls.key\\\\\\\"\\\\nI1014 13:15:06.092172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 13:15:06.112645 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 13:15:06.112671 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 13:15:06.112690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 13:15:06.112696 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 13:15:06.126751 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 13:15:06.126778 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126783 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW1014 13:15:06.126787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 13:15:06.126790 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 13:15:06.126793 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 13:15:06.126795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 13:15:06.127007 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 13:15:06.128605 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360cb71d40b47c350378789affa9d40d1097c487f19759ceac7dd653211dc523\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:54Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.052046 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.052110 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.052131 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.052157 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.052371 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:54Z","lastTransitionTime":"2025-10-14T13:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.066687 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:54Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.081737 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55e8261f-c97f-4d46-b31f-6ef737e6c4c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92d8ba988206535f4ad53245b92484c9cfaf86d2a768d171b6a56a3d71ce2855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c3fa61126ab1de3e82ff31cd2d54b72d6a4355db0dbe16a1300f5349ec032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76dc71de9a8769dc0878cc796ad53a4355d128d9d3011601d1e10127a390b87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968caf419d5ddbb9213c7325a25516bdec5108f010a0e3457faa716b6f6f8955\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:54Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.097713 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:54Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.111259 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c335f15dced5e4240b10bf8cf93f05aaa7954860de085d7d25e465ac49fcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ada4aa976fb1be52ba1daea092bc4a477f4fa57a4175db09718886f670cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:54Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.124289 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbgwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4ed727c-f4d1-47cd-a218-e22803eb1750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e839b3fcf6bd42a4f3c8167cfc3a8ce3d333f519ef8fa12daa8bfb7f795e6bb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\"
,\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gn97f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbgwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:54Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.134164 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cxcmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8cj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8cj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cxcmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:54Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.148465 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf13954e4938ed6d4493d155d0d8d3ddd585e161184d1869879f8464fb537bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:54Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.155018 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.155082 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.155093 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.155119 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.155133 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:54Z","lastTransitionTime":"2025-10-14T13:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.160277 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0481199b2dfab1f5e16e57b9e99c70988b1fcdf15877173722a436a0c7cd93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:54Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.176760 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f27c973f-d487-4b38-8921-f9c96635219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f246f81af5a032ce9db26f4f99c6cbcc43eb7ff2674a8de64d385efea9ae280d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l7nwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:54Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.257970 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.258023 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:54 crc 
kubenswrapper[4725]: I1014 13:15:54.258036 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.258051 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.258061 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:54Z","lastTransitionTime":"2025-10-14T13:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.359948 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.360946 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.361085 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.361213 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.361371 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:54Z","lastTransitionTime":"2025-10-14T13:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.464582 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.464652 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.464666 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.464685 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.464697 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:54Z","lastTransitionTime":"2025-10-14T13:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
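
Each "Node became not ready" entry carries the Ready condition as inline JSON. The following is a short, hedged Go sketch of how that payload parses, using a reduced stand-in for the upstream v1.NodeCondition type, limited to the fields visible in this log.

    package main

    import (
        "encoding/json"
        "fmt"
        "time"
    )

    // nodeCondition is an illustrative, trimmed form of v1.NodeCondition.
    type nodeCondition struct {
        Type               string    `json:"type"`
        Status             string    `json:"status"`
        LastHeartbeatTime  time.Time `json:"lastHeartbeatTime"`
        LastTransitionTime time.Time `json:"lastTransitionTime"`
        Reason             string    `json:"reason"`
        Message            string    `json:"message"`
    }

    func main() {
        // Condition payload copied from a "Node became not ready" entry above.
        raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:54Z","lastTransitionTime":"2025-10-14T13:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
        var c nodeCondition
        if err := json.Unmarshal([]byte(raw), &c); err != nil {
            panic(err)
        }
        fmt.Printf("%s=%s since %s (%s)\n", c.Type, c.Status,
            c.LastTransitionTime.Format(time.RFC3339), c.Reason)
    }

The timestamps are RFC 3339, which encoding/json handles natively through time.Time.
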
Has your network provider started?"}
Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.568205 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.568519 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.568756 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.568971 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.569120 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:54Z","lastTransitionTime":"2025-10-14T13:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.672323 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.672361 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.672374 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.672389 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.672400 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:54Z","lastTransitionTime":"2025-10-14T13:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.775364 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.775688 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.775802 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.775888 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.775969 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:54Z","lastTransitionTime":"2025-10-14T13:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.878248 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.878274 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.878283 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.878297 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.878307 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:54Z","lastTransitionTime":"2025-10-14T13:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.921047 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw"
Oct 14 13:15:54 crc kubenswrapper[4725]: E1014 13:15:54.921202 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b"
Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.981425 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.981540 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.981564 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.981592 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 13:15:54 crc kubenswrapper[4725]: I1014 13:15:54.981610 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:54Z","lastTransitionTime":"2025-10-14T13:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
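
The KubeletNotReady loop above repeats because the runtime's network readiness check finds no CNI network configuration in /etc/kubernetes/cni/net.d/. The following is a rough Go illustration of that kind of directory scan; the accepted extensions follow libcni convention, and this is illustrative, not the kubelet's actual code.

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // cniConfigPresent reports whether dir contains at least one CNI network
    // configuration file (.conf, .conflist, or .json, per libcni convention).
    func cniConfigPresent(dir string) (bool, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return false, err
        }
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        ok, err := cniConfigPresent("/etc/kubernetes/cni/net.d")
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        if !ok {
            // Mirrors the condition the kubelet keeps reporting above.
            fmt.Println("no CNI configuration file found; network plugin not ready")
        }
    }

Once a configuration file appears in that directory, the NetworkReady condition can flip and the "no CNI configuration file" message stops.
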
Has your network provider started?"}
Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.084877 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.084917 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.084928 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.084942 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.084953 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:55Z","lastTransitionTime":"2025-10-14T13:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.187219 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.187488 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.187716 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.187855 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.188123 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:55Z","lastTransitionTime":"2025-10-14T13:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.290635 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.290686 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.290697 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.290714 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.290726 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:55Z","lastTransitionTime":"2025-10-14T13:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.376402 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kbgwl_d4ed727c-f4d1-47cd-a218-e22803eb1750/kube-multus/0.log" Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.376472 4725 generic.go:334] "Generic (PLEG): container finished" podID="d4ed727c-f4d1-47cd-a218-e22803eb1750" containerID="e839b3fcf6bd42a4f3c8167cfc3a8ce3d333f519ef8fa12daa8bfb7f795e6bb6" exitCode=1 Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.376507 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kbgwl" event={"ID":"d4ed727c-f4d1-47cd-a218-e22803eb1750","Type":"ContainerDied","Data":"e839b3fcf6bd42a4f3c8167cfc3a8ce3d333f519ef8fa12daa8bfb7f795e6bb6"} Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.376945 4725 scope.go:117] "RemoveContainer" containerID="e839b3fcf6bd42a4f3c8167cfc3a8ce3d333f519ef8fa12daa8bfb7f795e6bb6" Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.392719 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ca808e-edcf-41b9-99a5-f10fcbbe6e72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7db8a1777567b9c9c186da537b33dd2a68202d45022e33ec691385e647ef5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a26fe24f459eeb79dbe24b7b637389b9ba47f6880369e92a4c823d216d17b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-
pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c401885a70b38cf87c5573364a8d07a79bd451eca6a6263abb37958614e9cf8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24408f31c736057fab41e161d394fe99f6d932020a796afc376013e2164aa209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24408f31c736057fab41e161d394fe99f6d932020a796afc376013e2164aa209\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:55Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.392843 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.393360 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.393371 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.393393 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.393406 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:55Z","lastTransitionTime":"2025-10-14T13:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.411096 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60823bb2-3f65-441b-968e-bee1ef699eaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369d927b570ab87076cac9e54716473a0856899d0f4697b4b30d07a5181c4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ddcef18a4ca3491877f3ed3e14e43e7a478afef84fe6d9614c5dfd52c25864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb26ea922d538bc6c5cff6f8d0474cefe723ab38afb6a734a1f412d7591355ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\
\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3920ebe7d3eb6b372d24be3b36b16b12c48ea827aae7cf0d00fd707b6681d857\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW1014 13:15:05.624087 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 13:15:05.624866 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 13:15:05.626194 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3329203933/tls.crt::/tmp/serving-cert-3329203933/tls.key\\\\\\\"\\\\nI1014 13:15:06.092172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 13:15:06.112645 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 13:15:06.112671 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 13:15:06.112690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 13:15:06.112696 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 13:15:06.126751 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 13:15:06.126778 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126783 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 13:15:06.126790 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 13:15:06.126793 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 13:15:06.126795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 13:15:06.127007 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 13:15:06.128605 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360cb71d40b47c350378789affa9d40d1097c487f19759ceac7dd653211dc523\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:55Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.427012 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:55Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.445470 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55e8261f-c97f-4d46-b31f-6ef737e6c4c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92d8ba988206535f4ad53245b92484c9cfaf86d2a768d171b6a56a3d71ce2855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c3fa61126ab1de3e82ff31cd2d54b72d6a4355db0dbe16a1300f5349ec032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76dc71de9a8769dc0878cc796ad53a4355d128d9d3011601d1e10127a390b87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968caf419d5ddbb9213c7325a25516bdec5108f010a0e3457faa716b6f6f8955\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:55Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.465371 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:55Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.485622 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c335f15dced5e4240b10bf8cf93f05aaa7954860de085d7d25e465ac49fcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ada4aa976fb1be52ba1daea092bc4a477f4fa57a4175db09718886f670cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:55Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.495630 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.495665 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.495674 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.495689 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.495698 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:55Z","lastTransitionTime":"2025-10-14T13:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.504928 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbgwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4ed727c-f4d1-47cd-a218-e22803eb1750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e839b3fcf6bd42a4f3c8167cfc3a8ce3d333f519ef8fa12daa8bfb7f795e6bb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e839b3fcf6bd42a4f3c8167cfc3a8ce3d333f519ef8fa12daa8bfb7f795e6bb6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T13:15:55Z\\\",\\\"message\\\":\\\"2025-10-14T13:15:09+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_43913cc9-3166-40cc-b691-06541d666bb9\\\\n2025-10-14T13:15:09+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_43913cc9-3166-40cc-b691-06541d666bb9 to /host/opt/cni/bin/\\\\n2025-10-14T13:15:09Z [verbose] multus-daemon started\\\\n2025-10-14T13:15:09Z [verbose] Readiness Indicator file check\\\\n2025-10-14T13:15:54Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gn97f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbgwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:55Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.520643 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cxcmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8cj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8cj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cxcmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:55Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.536904 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf13954e4938ed6d4493d155d0d8d3ddd585e161184d1869879f8464fb537bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:55Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.552899 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0481199b2dfab1f5e16e57b9e99c70988b1fcdf15877173722a436a0c7cd93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:55Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.569712 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f27c973f-d487-4b38-8921-f9c96635219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f246f81af5a032ce9db26f4f99c6cbcc43eb7ff2674a8de64d385efea9ae280d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l7nwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:55Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.581325 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:55Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.591309 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9mfx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"768af815-9351-4b60-a9de-9f188049acd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d9cb10af1479948d9b616c0c054bea341c96c4964809f1b208472617a376c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8zz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9mfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:55Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.598125 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.598147 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.598155 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.598169 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.598180 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:55Z","lastTransitionTime":"2025-10-14T13:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.602198 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7400c43922418faea2f15eab69dfcdb6d01fd14117423a0ce3de487078acd746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cc50d2bb6d8b894c9524b2557eda22f857a08c286a90a95ab4be265b524020c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9hh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:55Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.625775 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d54d71-93d1-4cde-940e-a371117f59bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e
41b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2b1e62f60db38faadf074db3ea102158cffc992ec8b64a7c79d48b92388fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe2b1e62f60db38faadf074db3ea102158cffc992ec8b64a7c79d48b92388fea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T13:15:34Z\\\",\\\"message\\\":\\\".go:409] Going to retry *v1.Pod resource setup for 1 objects: [openshift-multus/network-metrics-daemon-cxcmw]\\\\nI1014 13:15:33.963152 6443 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1014 13:15:33.963169 6443 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-cxcmw before timer (time: 2025-10-14 13:15:34.927380579 +0000 UTC m=+1.581869588): skip\\\\nI1014 13:15:33.963178 6443 factory.go:656] Stopping watch factory\\\\nI1014 13:15:33.963184 6443 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 41.871µs)\\\\nI1014 13:15:33.963004 6443 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name pod/openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b9mz9. OVN-Kubernetes controller took 3.408e-05 seconds. 
No OVN measurement.\\\\nI1014 13:15:33.963197 6443 ovnkube.go:599] Stopped ovnkube\\\\nI1014 13:15:33.963196 6443 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1014 13:15:33.963210 6443 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1014 13:15:33.963242 6443 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1014 13:15:33.963343 6443 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-9v9qj_openshift-ovn-kubernetes(38d54d71-93d1-4cde-940e-a371117f59bd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9v9qj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:55Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.638293 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8fjcf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db996ea5-a3bf-4db3-a0df-fdd640228c83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1932e79f25b11a7dd11ee7d1b43139b7b370aac134f24741288e73006992f497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8fjcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:55Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.656842 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3dd5bed9576c9be576b8118eff0593b0eaf3d8fd8614dfbf566906a78f249e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmpdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b75fa898729a1cbd704de8ab24f7d7bb470e00eb9f3ee51e23d86222961b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmpdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jbldr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:55Z is after 2025-08-24T17:21:41Z" Oct 14 
13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.700874 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.700984 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.701004 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.701025 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.701038 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:55Z","lastTransitionTime":"2025-10-14T13:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.807706 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.807745 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.807753 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.807769 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.807778 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:55Z","lastTransitionTime":"2025-10-14T13:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.910097 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.910128 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.910147 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.910164 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.910174 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:55Z","lastTransitionTime":"2025-10-14T13:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.920632 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.920645 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:15:55 crc kubenswrapper[4725]: E1014 13:15:55.920806 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 13:15:55 crc kubenswrapper[4725]: I1014 13:15:55.920655 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:15:55 crc kubenswrapper[4725]: E1014 13:15:55.920887 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 13:15:55 crc kubenswrapper[4725]: E1014 13:15:55.921013 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.012661 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.012702 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.012711 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.012725 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.012735 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:56Z","lastTransitionTime":"2025-10-14T13:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.116069 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.116108 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.116116 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.116130 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.116139 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:56Z","lastTransitionTime":"2025-10-14T13:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.218610 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.218686 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.218702 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.218725 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.218746 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:56Z","lastTransitionTime":"2025-10-14T13:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.320694 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.320772 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.320783 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.320800 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.320813 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:56Z","lastTransitionTime":"2025-10-14T13:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.381874 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kbgwl_d4ed727c-f4d1-47cd-a218-e22803eb1750/kube-multus/0.log" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.381943 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kbgwl" event={"ID":"d4ed727c-f4d1-47cd-a218-e22803eb1750","Type":"ContainerStarted","Data":"b42368f230c49822144cc9910a09a22f268f840c3540e0a68f7c16659e5061bb"} Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.396212 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ca808e-edcf-41b9-99a5-f10fcbbe6e72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7db8a1777567b9c9c186da537b33dd2a68202d45022e33ec691385e647ef5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a26fe24f459eeb79dbe24b7b637389b9ba47f6880369e92a4c823d216d17b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c401885a70b38cf87c5573364a8d07a79bd451eca6a6263abb37958614e9cf8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24408f31c736057fab41e161d394fe99f6d932020a796afc376013e2164aa209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24408f31c736057fab41e161d394fe99f6d932020a796afc376013e2164aa209\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:56Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.413486 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60823bb2-3f65-441b-968e-bee1ef699eaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369d927b570ab87076cac9e54716473a0856899d0f4697b4b30d07a5181c4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ddcef18a4ca3491877f3ed3e14e43e7a478afef84fe6d9614c5dfd52c25864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb26ea922d538bc6c5cff6f8d0474cefe723ab38afb6a734a1f412d7591355ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3920ebe7d3eb6b372d24be3b36b16b12c48ea827aae7cf0d00fd707b6681d857\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW1014 13:15:05.624087 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 13:15:05.624866 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 13:15:05.626194 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3329203933/tls.crt::/tmp/serving-cert-3329203933/tls.key\\\\\\\"\\\\nI1014 13:15:06.092172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 13:15:06.112645 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 13:15:06.112671 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 13:15:06.112690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 13:15:06.112696 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 13:15:06.126751 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 13:15:06.126778 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126783 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 13:15:06.126790 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 13:15:06.126793 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 13:15:06.126795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 13:15:06.127007 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 13:15:06.128605 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360cb71d40b47c350378789affa9d40d1097c487f19759ceac7dd653211dc523\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:56Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.423596 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.423645 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.423656 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.423677 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.423690 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:56Z","lastTransitionTime":"2025-10-14T13:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.428297 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:56Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.443630 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c335f15dced5e4240b10bf8cf93f05aaa7954860de085d7d25e465ac49fcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ada4aa976fb1be52ba1daea092bc4a477f4fa57a4175db09718886f670cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:56Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.458647 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbgwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4ed727c-f4d1-47cd-a218-e22803eb1750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b42368f230c49822144cc9910a09a22f268f840c3540e0a68f7c16659e5061bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e839b3fcf6bd42a4f3c8167cfc3a8ce3d333f519ef8fa12daa8bfb7f795e6bb6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T13:15:55Z\\\",\\\"message\\\":\\\"2025-10-14T13:15:09+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_43913cc9-3166-40cc-b691-06541d666bb9\\\\n2025-10-14T13:15:09+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_43913cc9-3166-40cc-b691-06541d666bb9 to /host/opt/cni/bin/\\\\n2025-10-14T13:15:09Z [verbose] multus-daemon started\\\\n2025-10-14T13:15:09Z [verbose] Readiness Indicator file check\\\\n2025-10-14T13:15:54Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gn97f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbgwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:56Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.471067 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cxcmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8cj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8cj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cxcmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:56Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.486525 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55e8261f-c97f-4d46-b31f-6ef737e6c4c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92d8ba988206535f4ad53245b92484c9cfaf86d2a768d171b6a56a3d71ce2855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c3fa61126ab1de3e82ff31cd2d54b72d6a4355db0dbe16a1300f5349ec032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76dc71de9a8769dc0878cc796ad53a4355d128d9d3011601d1e10127a390b87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968caf419d5ddbb9213c7325a25516bdec5108f010a0e3457faa716b6f6f8955\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:56Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.499732 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:56Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.518171 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f27c973f-d487-4b38-8921-f9c96635219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f246f81af5a032ce9db26f4f99c6cbcc43eb7ff2674a8de64d385efea9ae280d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l7nwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:56Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.526528 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.526575 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.526587 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.526603 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.526613 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:56Z","lastTransitionTime":"2025-10-14T13:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.533323 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf13954e4938ed6d4493d155d0d8d3ddd585e161184d1869879f8464fb537bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:56Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.547330 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0481199b2dfab1f5e16e57b9e99c70988b1fcdf15877173722a436a0c7cd93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:56Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.568687 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d54d71-93d1-4cde-940e-a371117f59bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2b1e62f60db38faadf074db3ea102158cffc99
2ec8b64a7c79d48b92388fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe2b1e62f60db38faadf074db3ea102158cffc992ec8b64a7c79d48b92388fea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T13:15:34Z\\\",\\\"message\\\":\\\".go:409] Going to retry *v1.Pod resource setup for 1 objects: [openshift-multus/network-metrics-daemon-cxcmw]\\\\nI1014 13:15:33.963152 6443 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1014 13:15:33.963169 6443 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-cxcmw before timer (time: 2025-10-14 13:15:34.927380579 +0000 UTC m=+1.581869588): skip\\\\nI1014 13:15:33.963178 6443 factory.go:656] Stopping watch factory\\\\nI1014 13:15:33.963184 6443 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 41.871µs)\\\\nI1014 13:15:33.963004 6443 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name pod/openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b9mz9. OVN-Kubernetes controller took 3.408e-05 seconds. No OVN measurement.\\\\nI1014 13:15:33.963197 6443 ovnkube.go:599] Stopped ovnkube\\\\nI1014 13:15:33.963196 6443 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1014 13:15:33.963210 6443 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1014 13:15:33.963242 6443 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1014 13:15:33.963343 6443 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9v9qj_openshift-ovn-kubernetes(38d54d71-93d1-4cde-940e-a371117f59bd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9v9qj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:56Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.582251 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8fjcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db996ea5-a3bf-4db3-a0df-fdd640228c83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1932e79f25b11a7dd11ee7d1b43139b7b370aac134f24741288e73006992f497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8fjcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:56Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.595332 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3dd5bed9576c9be576b8118eff0593b0eaf3d8fd8614dfbf566906a78f249e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmpdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b75fa898729a1cbd704de8ab24f7d7bb470e00eb9f3ee51e23d86222961b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmpdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jbldr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:56Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.617661 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:56Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.629707 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.629757 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.629771 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.629792 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.629804 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:56Z","lastTransitionTime":"2025-10-14T13:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.640707 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9mfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"768af815-9351-4b60-a9de-9f188049acd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d9cb10af1479948d9b616c0c054bea341c96c4964809f1b208472617a376c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8zz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9mfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:56Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.655570 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7400c43922418faea2f15eab69dfcdb6d01fd14117423a0ce3de487078acd746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cc50d2bb6d8b894c9524b2557eda22f857a08c286a90a95ab4be265b524020c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9hh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:15:56Z is after 2025-08-24T17:21:41Z" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.732685 4725 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.732783 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.732808 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.732839 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.732864 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:56Z","lastTransitionTime":"2025-10-14T13:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.835568 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.835615 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.835627 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.835648 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.835661 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:56Z","lastTransitionTime":"2025-10-14T13:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.920827 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:15:56 crc kubenswrapper[4725]: E1014 13:15:56.921088 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.938282 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.938334 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.938351 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.938369 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:56 crc kubenswrapper[4725]: I1014 13:15:56.938384 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:56Z","lastTransitionTime":"2025-10-14T13:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.041736 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.041791 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.041802 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.041821 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.041833 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:57Z","lastTransitionTime":"2025-10-14T13:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.145421 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.145484 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.145495 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.145510 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.145519 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:57Z","lastTransitionTime":"2025-10-14T13:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.247919 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.247968 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.247979 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.247998 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.248009 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:57Z","lastTransitionTime":"2025-10-14T13:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.350724 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.350770 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.350782 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.350801 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.350813 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:57Z","lastTransitionTime":"2025-10-14T13:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.452887 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.452928 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.452941 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.452957 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.452968 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:57Z","lastTransitionTime":"2025-10-14T13:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.556372 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.556436 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.556478 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.556507 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.556533 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:57Z","lastTransitionTime":"2025-10-14T13:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.660033 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.660092 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.660103 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.660126 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.660140 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:57Z","lastTransitionTime":"2025-10-14T13:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.763255 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.763543 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.763554 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.763576 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.763593 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:57Z","lastTransitionTime":"2025-10-14T13:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.866091 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.866138 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.866152 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.866168 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.866183 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:57Z","lastTransitionTime":"2025-10-14T13:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.920839 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.921040 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:15:57 crc kubenswrapper[4725]: E1014 13:15:57.921142 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.921241 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:15:57 crc kubenswrapper[4725]: E1014 13:15:57.921404 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 13:15:57 crc kubenswrapper[4725]: E1014 13:15:57.921807 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.969191 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.969253 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.969266 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.969288 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:57 crc kubenswrapper[4725]: I1014 13:15:57.969311 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:57Z","lastTransitionTime":"2025-10-14T13:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:58 crc kubenswrapper[4725]: I1014 13:15:58.072652 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:58 crc kubenswrapper[4725]: I1014 13:15:58.072810 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:58 crc kubenswrapper[4725]: I1014 13:15:58.072827 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:58 crc kubenswrapper[4725]: I1014 13:15:58.072860 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:58 crc kubenswrapper[4725]: I1014 13:15:58.072874 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:58Z","lastTransitionTime":"2025-10-14T13:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:58 crc kubenswrapper[4725]: I1014 13:15:58.175958 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:58 crc kubenswrapper[4725]: I1014 13:15:58.176074 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:58 crc kubenswrapper[4725]: I1014 13:15:58.176087 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:58 crc kubenswrapper[4725]: I1014 13:15:58.176104 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:58 crc kubenswrapper[4725]: I1014 13:15:58.176116 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:58Z","lastTransitionTime":"2025-10-14T13:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:58 crc kubenswrapper[4725]: I1014 13:15:58.282083 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:58 crc kubenswrapper[4725]: I1014 13:15:58.282163 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:58 crc kubenswrapper[4725]: I1014 13:15:58.282184 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:58 crc kubenswrapper[4725]: I1014 13:15:58.282209 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:58 crc kubenswrapper[4725]: I1014 13:15:58.282235 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:58Z","lastTransitionTime":"2025-10-14T13:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:58 crc kubenswrapper[4725]: I1014 13:15:58.385727 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:58 crc kubenswrapper[4725]: I1014 13:15:58.385794 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:58 crc kubenswrapper[4725]: I1014 13:15:58.385817 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:58 crc kubenswrapper[4725]: I1014 13:15:58.385845 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:58 crc kubenswrapper[4725]: I1014 13:15:58.385868 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:58Z","lastTransitionTime":"2025-10-14T13:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:58 crc kubenswrapper[4725]: I1014 13:15:58.489791 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:58 crc kubenswrapper[4725]: I1014 13:15:58.489845 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:58 crc kubenswrapper[4725]: I1014 13:15:58.489864 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:58 crc kubenswrapper[4725]: I1014 13:15:58.489887 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:58 crc kubenswrapper[4725]: I1014 13:15:58.489906 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:58Z","lastTransitionTime":"2025-10-14T13:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:58 crc kubenswrapper[4725]: I1014 13:15:58.607357 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:58 crc kubenswrapper[4725]: I1014 13:15:58.607406 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:58 crc kubenswrapper[4725]: I1014 13:15:58.607424 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:58 crc kubenswrapper[4725]: I1014 13:15:58.607498 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:58 crc kubenswrapper[4725]: I1014 13:15:58.607521 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:58Z","lastTransitionTime":"2025-10-14T13:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:58 crc kubenswrapper[4725]: I1014 13:15:58.710513 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:58 crc kubenswrapper[4725]: I1014 13:15:58.710563 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:58 crc kubenswrapper[4725]: I1014 13:15:58.710579 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:58 crc kubenswrapper[4725]: I1014 13:15:58.710603 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:58 crc kubenswrapper[4725]: I1014 13:15:58.710620 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:58Z","lastTransitionTime":"2025-10-14T13:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:58 crc kubenswrapper[4725]: I1014 13:15:58.813638 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:58 crc kubenswrapper[4725]: I1014 13:15:58.813703 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:58 crc kubenswrapper[4725]: I1014 13:15:58.813717 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:58 crc kubenswrapper[4725]: I1014 13:15:58.813736 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:58 crc kubenswrapper[4725]: I1014 13:15:58.813749 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:58Z","lastTransitionTime":"2025-10-14T13:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:58 crc kubenswrapper[4725]: I1014 13:15:58.916648 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:58 crc kubenswrapper[4725]: I1014 13:15:58.916702 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:58 crc kubenswrapper[4725]: I1014 13:15:58.916719 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:58 crc kubenswrapper[4725]: I1014 13:15:58.916743 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:58 crc kubenswrapper[4725]: I1014 13:15:58.916760 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:58Z","lastTransitionTime":"2025-10-14T13:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:58 crc kubenswrapper[4725]: I1014 13:15:58.921055 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:15:58 crc kubenswrapper[4725]: E1014 13:15:58.921267 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b" Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.019952 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.020004 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.020014 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.020030 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.020041 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:59Z","lastTransitionTime":"2025-10-14T13:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.123681 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.123750 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.123774 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.123803 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.123828 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:59Z","lastTransitionTime":"2025-10-14T13:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.226506 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.226550 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.226577 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.226627 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.226644 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:59Z","lastTransitionTime":"2025-10-14T13:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.329054 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.329099 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.329113 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.329134 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.329150 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:59Z","lastTransitionTime":"2025-10-14T13:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.431816 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.431874 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.431890 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.431913 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.431932 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:59Z","lastTransitionTime":"2025-10-14T13:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.534396 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.534433 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.534468 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.534484 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.534495 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:59Z","lastTransitionTime":"2025-10-14T13:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.636288 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.636316 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.636323 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.636336 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.636344 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:59Z","lastTransitionTime":"2025-10-14T13:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.738694 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.738764 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.738789 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.738819 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.738841 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:59Z","lastTransitionTime":"2025-10-14T13:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.840615 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.840645 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.840653 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.840665 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.840674 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:59Z","lastTransitionTime":"2025-10-14T13:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.920052 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.920089 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.920052 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:15:59 crc kubenswrapper[4725]: E1014 13:15:59.920160 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 13:15:59 crc kubenswrapper[4725]: E1014 13:15:59.920209 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 13:15:59 crc kubenswrapper[4725]: E1014 13:15:59.920287 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.942890 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.942934 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.942944 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.942959 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:15:59 crc kubenswrapper[4725]: I1014 13:15:59.942971 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:15:59Z","lastTransitionTime":"2025-10-14T13:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.045750 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.045797 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.045806 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.045821 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.045831 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:00Z","lastTransitionTime":"2025-10-14T13:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.148193 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.148234 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.148243 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.148256 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.148265 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:00Z","lastTransitionTime":"2025-10-14T13:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.250512 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.250545 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.250554 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.250566 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.250576 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:00Z","lastTransitionTime":"2025-10-14T13:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.353671 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.353724 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.353740 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.353759 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.353771 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:00Z","lastTransitionTime":"2025-10-14T13:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.456554 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.456609 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.456622 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.456644 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.456661 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:00Z","lastTransitionTime":"2025-10-14T13:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.559999 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.560078 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.560103 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.560133 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.560160 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:00Z","lastTransitionTime":"2025-10-14T13:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.663978 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.664055 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.664076 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.664110 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.664131 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:00Z","lastTransitionTime":"2025-10-14T13:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.767595 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.767644 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.767655 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.767673 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.767686 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:00Z","lastTransitionTime":"2025-10-14T13:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.871242 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.871308 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.871338 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.871365 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.871382 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:00Z","lastTransitionTime":"2025-10-14T13:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.920702 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:16:00 crc kubenswrapper[4725]: E1014 13:16:00.920954 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b" Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.922101 4725 scope.go:117] "RemoveContainer" containerID="fe2b1e62f60db38faadf074db3ea102158cffc992ec8b64a7c79d48b92388fea" Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.974554 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.974602 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.974612 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.974626 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:00 crc kubenswrapper[4725]: I1014 13:16:00.974635 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:00Z","lastTransitionTime":"2025-10-14T13:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.077339 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.077402 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.077423 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.077495 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.077517 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:01Z","lastTransitionTime":"2025-10-14T13:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.180930 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.181366 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.181557 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.181734 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.181877 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:01Z","lastTransitionTime":"2025-10-14T13:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.292002 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.292056 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.292070 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.292093 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.292110 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:01Z","lastTransitionTime":"2025-10-14T13:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.395765 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.395799 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.395807 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.395820 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.395829 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:01Z","lastTransitionTime":"2025-10-14T13:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.498807 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.498847 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.498856 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.498870 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.498880 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:01Z","lastTransitionTime":"2025-10-14T13:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.601994 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.602036 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.602047 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.602062 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.602073 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:01Z","lastTransitionTime":"2025-10-14T13:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.705006 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.705057 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.705071 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.705088 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.705100 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:01Z","lastTransitionTime":"2025-10-14T13:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.808231 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.808280 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.808296 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.808320 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.808339 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:01Z","lastTransitionTime":"2025-10-14T13:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.911377 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.911485 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.911508 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.911543 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.911581 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:01Z","lastTransitionTime":"2025-10-14T13:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.920793 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.920819 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:16:01 crc kubenswrapper[4725]: E1014 13:16:01.920958 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 13:16:01 crc kubenswrapper[4725]: I1014 13:16:01.920996 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:16:01 crc kubenswrapper[4725]: E1014 13:16:01.921129 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 13:16:01 crc kubenswrapper[4725]: E1014 13:16:01.921236 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.014226 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.014268 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.014279 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.014293 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.014301 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:02Z","lastTransitionTime":"2025-10-14T13:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.117620 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.117664 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.117677 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.117693 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.117703 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:02Z","lastTransitionTime":"2025-10-14T13:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.220885 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.220951 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.220970 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.220996 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.221013 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:02Z","lastTransitionTime":"2025-10-14T13:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.325060 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.325129 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.325153 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.325184 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.325203 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:02Z","lastTransitionTime":"2025-10-14T13:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.403711 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9v9qj_38d54d71-93d1-4cde-940e-a371117f59bd/ovnkube-controller/2.log" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.407862 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" event={"ID":"38d54d71-93d1-4cde-940e-a371117f59bd","Type":"ContainerStarted","Data":"c17bed2977625610ad16227eb2cf9d89f9964a0b6a3305056c46d4230602f4f9"} Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.427499 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.427561 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.427580 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.427603 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.427622 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:02Z","lastTransitionTime":"2025-10-14T13:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.536951 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.537019 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.537036 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.537063 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.537081 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:02Z","lastTransitionTime":"2025-10-14T13:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.639666 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.639714 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.639725 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.639743 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.639755 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:02Z","lastTransitionTime":"2025-10-14T13:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.741635 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.741672 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.741682 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.741699 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.741709 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:02Z","lastTransitionTime":"2025-10-14T13:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.844419 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.844478 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.844490 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.844508 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.844520 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:02Z","lastTransitionTime":"2025-10-14T13:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.920104 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:16:02 crc kubenswrapper[4725]: E1014 13:16:02.920314 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.938437 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.938502 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.938516 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.938535 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.938553 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:02Z","lastTransitionTime":"2025-10-14T13:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:02 crc kubenswrapper[4725]: E1014 13:16:02.952486 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:16:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:16:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:16:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:16:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:16:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:16:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:16:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:16:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d8471f97-cd84-4e08-baca-4ac91f02188a\\\",\\\"systemUUID\\\":\\\"c40d671b-403d-4187-8320-a34d153a3ed0\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:02Z is after 
2025-08-24T17:21:41Z" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.961239 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.961291 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.961303 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.961327 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.961351 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:02Z","lastTransitionTime":"2025-10-14T13:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:02 crc kubenswrapper[4725]: E1014 13:16:02.976065 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:16:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:16:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:16:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:16:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:16:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:16:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:16:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:16:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d8471f97-cd84-4e08-baca-4ac91f02188a\\\",\\\"systemUUID\\\":\\\"c40d671b-403d-4187-8320-a34d153a3ed0\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:02Z is after 
2025-08-24T17:21:41Z" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.981197 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.981229 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.981240 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.981258 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:02 crc kubenswrapper[4725]: I1014 13:16:02.981269 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:02Z","lastTransitionTime":"2025-10-14T13:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:02 crc kubenswrapper[4725]: E1014 13:16:02.997619 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:16:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:16:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:16:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:16:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:16:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:16:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:16:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:16:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d8471f97-cd84-4e08-baca-4ac91f02188a\\\",\\\"systemUUID\\\":\\\"c40d671b-403d-4187-8320-a34d153a3ed0\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:02Z is after 
2025-08-24T17:21:41Z" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.001891 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.001927 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.001938 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.001960 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.001973 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:03Z","lastTransitionTime":"2025-10-14T13:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:03 crc kubenswrapper[4725]: E1014 13:16:03.015584 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:16:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:16:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:16:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:16:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:16:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:16:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:16:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:16:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d8471f97-cd84-4e08-baca-4ac91f02188a\\\",\\\"systemUUID\\\":\\\"c40d671b-403d-4187-8320-a34d153a3ed0\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:03Z is after 
2025-08-24T17:21:41Z" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.020318 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.020376 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.020395 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.020419 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.020438 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:03Z","lastTransitionTime":"2025-10-14T13:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:03 crc kubenswrapper[4725]: E1014 13:16:03.033340 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:16:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:16:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:16:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:16:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:16:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:16:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:16:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:16:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d8471f97-cd84-4e08-baca-4ac91f02188a\\\",\\\"systemUUID\\\":\\\"c40d671b-403d-4187-8320-a34d153a3ed0\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:03Z is after 
2025-08-24T17:21:41Z" Oct 14 13:16:03 crc kubenswrapper[4725]: E1014 13:16:03.033611 4725 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.035540 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.035589 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.035603 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.035623 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.035636 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:03Z","lastTransitionTime":"2025-10-14T13:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.139100 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.139168 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.139186 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.139212 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.139232 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:03Z","lastTransitionTime":"2025-10-14T13:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.242745 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.242818 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.242847 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.242879 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.242909 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:03Z","lastTransitionTime":"2025-10-14T13:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.346133 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.346187 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.346199 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.346217 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.346234 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:03Z","lastTransitionTime":"2025-10-14T13:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.414314 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9v9qj_38d54d71-93d1-4cde-940e-a371117f59bd/ovnkube-controller/3.log" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.415553 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9v9qj_38d54d71-93d1-4cde-940e-a371117f59bd/ovnkube-controller/2.log" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.419495 4725 generic.go:334] "Generic (PLEG): container finished" podID="38d54d71-93d1-4cde-940e-a371117f59bd" containerID="c17bed2977625610ad16227eb2cf9d89f9964a0b6a3305056c46d4230602f4f9" exitCode=1 Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.419560 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" event={"ID":"38d54d71-93d1-4cde-940e-a371117f59bd","Type":"ContainerDied","Data":"c17bed2977625610ad16227eb2cf9d89f9964a0b6a3305056c46d4230602f4f9"} Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.419617 4725 scope.go:117] "RemoveContainer" containerID="fe2b1e62f60db38faadf074db3ea102158cffc992ec8b64a7c79d48b92388fea" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.420720 4725 scope.go:117] "RemoveContainer" containerID="c17bed2977625610ad16227eb2cf9d89f9964a0b6a3305056c46d4230602f4f9" Oct 14 13:16:03 crc kubenswrapper[4725]: E1014 13:16:03.420985 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-9v9qj_openshift-ovn-kubernetes(38d54d71-93d1-4cde-940e-a371117f59bd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.441694 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:03Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.448668 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.448729 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.448755 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.448782 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.448808 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:03Z","lastTransitionTime":"2025-10-14T13:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.454772 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9mfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"768af815-9351-4b60-a9de-9f188049acd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d9cb10af1479948d9b616c0c054bea341c96c4964809f1b208472617a376c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8zz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9mfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:03Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.470601 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7400c43922418faea2f15eab69dfcdb6d01fd14117423a0ce3de487078acd746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cc50d2bb6d8b894c9524b2557eda22f857a08c286a90a95ab4be265b524020c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9hh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:03Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.501094 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d54d71-93d1-4cde-940e-a371117f59bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c17bed2977625610ad16227eb2cf9d89f9964a0b6a3305056c46d4230602f4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe2b1e62f60db38faadf074db3ea102158cffc992ec8b64a7c79d48b92388fea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T13:15:34Z\\\",\\\"message\\\":\\\".go:409] Going to retry *v1.Pod resource setup for 1 objects: [openshift-multus/network-metrics-daemon-cxcmw]\\\\nI1014 13:15:33.963152 6443 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1014 13:15:33.963169 6443 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-cxcmw before timer (time: 2025-10-14 13:15:34.927380579 +0000 UTC m=+1.581869588): skip\\\\nI1014 13:15:33.963178 6443 factory.go:656] Stopping watch factory\\\\nI1014 13:15:33.963184 6443 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 41.871µs)\\\\nI1014 13:15:33.963004 6443 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name pod/openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b9mz9. OVN-Kubernetes controller took 3.408e-05 seconds. 
No OVN measurement.\\\\nI1014 13:15:33.963197 6443 ovnkube.go:599] Stopped ovnkube\\\\nI1014 13:15:33.963196 6443 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1014 13:15:33.963210 6443 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1014 13:15:33.963242 6443 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1014 13:15:33.963343 6443 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c17bed2977625610ad16227eb2cf9d89f9964a0b6a3305056c46d4230602f4f9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T13:16:03Z\\\",\\\"message\\\":\\\"g{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-controller-manager/controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.149\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1014 13:16:03.045916 6802 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-n9mfx\\\\nI1014 13:16:03.045925 6802 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-n9mfx in node crc\\\\nI1014 13:16:03.045931 6802 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-n9mfx after 0 failed attempt(s)\\\\nI1014 13:16:03.045938 6802 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-n9mfx\\\\nF1014 13:16:03.045902 6802 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared 
inform\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9v9qj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:03Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.515227 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8fjcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db996ea5-a3bf-4db3-a0df-fdd640228c83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1932e79f25b11a7dd11ee7d1b43139b7b370aac134f24741288e73006992f497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.16
8.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8fjcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:03Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.535181 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3dd5bed9576c9be576b8118eff0593b0eaf3d8fd8614dfbf566906a78f249e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmpdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b75fa898729a1cbd704de8ab24f7d7bb470e00eb9f3ee51e23d86222961b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\
\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmpdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jbldr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:03Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.549998 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ca808e-edcf-41b9-99a5-f10fcbbe6e72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7db8a1777567b9c9c186da537b33dd2a68202d45022e33ec691385e647ef5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a26fe24f459eeb79dbe24b7b637389b9ba47f6880369e92a4c823d216d17b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://c401885a70b38cf87c5573364a8d07a79bd451eca6a6263abb37958614e9cf8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24408f31c736057fab41e161d394fe99f6d932020a796afc376013e2164aa209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24408f31c736057fab41e161d394fe99f6d932020a796afc376013e2164aa209\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:03Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.551483 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.551519 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.551531 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.551546 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.551558 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:03Z","lastTransitionTime":"2025-10-14T13:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.564944 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60823bb2-3f65-441b-968e-bee1ef699eaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369d927b570ab87076cac9e54716473a0856899d0f4697b4b30d07a5181c4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ddcef18a4ca3491877f3ed3e14e43e7a478afef84fe6d9614c5dfd52c25864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb26ea922d538bc6c5cff6f8d0474cefe723ab38afb6a734a1f412d7591355ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3920ebe7d3eb6b372d24be3b36b16b12c48ea827aae7cf0d00fd707b6681d857\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW1014 13:15:05.624087 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 13:15:05.624866 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 13:15:05.626194 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3329203933/tls.crt::/tmp/serving-cert-3329203933/tls.key\\\\\\\"\\\\nI1014 13:15:06.092172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 13:15:06.112645 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 13:15:06.112671 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 13:15:06.112690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 13:15:06.112696 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 13:15:06.126751 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 13:15:06.126778 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126783 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 13:15:06.126790 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 13:15:06.126793 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 13:15:06.126795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 13:15:06.127007 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 13:15:06.128605 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360cb71d40b47c350378789affa9d40d1097c487f19759ceac7dd653211dc523\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:03Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.576076 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:03Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.589820 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55e8261f-c97f-4d46-b31f-6ef737e6c4c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92d8ba988206535f4ad53245b92484c9cfaf86d2a768d171b6a56a3d71ce2855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c3fa61126ab1de3e82ff31cd2d54b72d6a4355db0dbe16a1300f5349ec032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76dc71de9a8769dc0878cc796ad53a4355d128d9d3011601d1e10127a390b87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968caf419d5ddbb9213c7325a25516bdec5108f010a0e3457faa716b6f6f8955\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:03Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.602781 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:03Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.616270 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c335f15dced5e4240b10bf8cf93f05aaa7954860de085d7d25e465ac49fcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ada4aa976fb1be52ba1daea092bc4a477f4fa57a4175db09718886f670cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:03Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.631196 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbgwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4ed727c-f4d1-47cd-a218-e22803eb1750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b42368f230c49822144cc9910a09a22f268f840c3540e0a68f7c16659e5061bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e839b3fcf6bd42a4f3c8167cfc3a8ce3d333f519ef8fa12daa8bfb7f795e6bb6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T13:15:55Z\\\",\\\"message\\\":\\\"2025-10-14T13:15:09+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_43913cc9-3166-40cc-b691-06541d666bb9\\\\n2025-10-14T13:15:09+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_43913cc9-3166-40cc-b691-06541d666bb9 to /host/opt/cni/bin/\\\\n2025-10-14T13:15:09Z [verbose] multus-daemon started\\\\n2025-10-14T13:15:09Z [verbose] Readiness Indicator file check\\\\n2025-10-14T13:15:54Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gn97f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbgwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:03Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.642535 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cxcmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8cj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8cj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cxcmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:03Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.655668 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.655714 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.655723 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.655743 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.655760 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:03Z","lastTransitionTime":"2025-10-14T13:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.658149 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf13954e4938ed6d4493d155d0d8d3ddd585e161184d1869879f8464fb537bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:03Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.671002 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0481199b2dfab1f5e16e57b9e99c70988b1fcdf15877173722a436a0c7cd93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:03Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.685783 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f27c973f-d487-4b38-8921-f9c96635219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f246f81af5a032ce9db26f4f99c6cbcc43eb7ff2674a8de64d385efea9ae280d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l7nwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:03Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.760371 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.760483 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:03 crc 
kubenswrapper[4725]: I1014 13:16:03.760499 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.760517 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.760530 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:03Z","lastTransitionTime":"2025-10-14T13:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.862748 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.863155 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.863174 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.863199 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.863219 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:03Z","lastTransitionTime":"2025-10-14T13:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.921128 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:16:03 crc kubenswrapper[4725]: E1014 13:16:03.921322 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.921935 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:16:03 crc kubenswrapper[4725]: E1014 13:16:03.922086 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.922317 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:16:03 crc kubenswrapper[4725]: E1014 13:16:03.922409 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.942073 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c335f15dced5e4240b10bf8cf93f05aaa7954860de085d7d25e465ac49fcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ada4aa976fb1be52ba1daea092bc4a477f4fa57a4175db09718886f670cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:03Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.958260 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbgwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4ed727c-f4d1-47cd-a218-e22803eb1750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b42368f230c49822144cc9910a09a22f268f840c3540e0a68f7c16659e5061bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e839b3fcf6bd42a4f3c8167cfc3a8ce3d333f519ef8fa12daa8bfb7f795e6bb6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T13:15:55Z\\\",\\\"message\\\":\\\"2025-10-14T13:15:09+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_43913cc9-3166-40cc-b691-06541d666bb9\\\\n2025-10-14T13:15:09+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_43913cc9-3166-40cc-b691-06541d666bb9 to /host/opt/cni/bin/\\\\n2025-10-14T13:15:09Z [verbose] multus-daemon started\\\\n2025-10-14T13:15:09Z [verbose] Readiness Indicator file check\\\\n2025-10-14T13:15:54Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gn97f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbgwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:03Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.970320 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.970362 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.970373 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.970388 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.970399 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:03Z","lastTransitionTime":"2025-10-14T13:16:03Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.976222 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cxcmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8cj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8cj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cxcmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:03Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:03 crc kubenswrapper[4725]: I1014 13:16:03.995414 4725 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55e8261f-c97f-4d46-b31f-6ef737e6c4c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92d8ba988206535f4ad53245b92484c9cfaf86d2a768d171b6a56a3d71ce2855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c3fa61126ab1de3e82ff31cd2d54b72d6a4355db0dbe16a1300f5349ec032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76dc71de9a8769dc0878cc796ad53a4355d128d9d3011601d1e10127a390b87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://968caf419d5ddbb9213c7325a25516bdec5108f010a0e3457faa716b6f6f8955\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:03Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.010316 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:04Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.031898 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f27c973f-d487-4b38-8921-f9c96635219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f246f81af5a032ce9db26f4f99c6cbcc43eb7ff2674a8de64d385efea9ae280d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l7nwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:04Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.051270 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf13954e4938ed6d4493d155d0d8d3ddd585e161184d1869879f8464fb537bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:04Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.069301 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0481199b2dfab1f5e16e57b9e99c70988b1fcdf15877173722a436a0c7cd93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:04Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.073337 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.073373 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.073383 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.073397 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.073410 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:04Z","lastTransitionTime":"2025-10-14T13:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.091780 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d54d71-93d1-4cde-940e-a371117f59bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c17bed2977625610ad16227eb2cf9d89f9964a0b6a3305056c46d4230602f4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe2b1e62f60db38faadf074db3ea102158cffc992ec8b64a7c79d48b92388fea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T13:15:34Z\\\",\\\"message\\\":\\\".go:409] Going to retry *v1.Pod resource setup for 1 objects: [openshift-multus/network-metrics-daemon-cxcmw]\\\\nI1014 13:15:33.963152 6443 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1014 13:15:33.963169 6443 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-cxcmw before timer (time: 2025-10-14 13:15:34.927380579 +0000 UTC m=+1.581869588): skip\\\\nI1014 13:15:33.963178 6443 factory.go:656] Stopping watch factory\\\\nI1014 13:15:33.963184 6443 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 41.871µs)\\\\nI1014 13:15:33.963004 6443 ovnkube_controller.go:1292] Config duration recorder: kind/namespace/name pod/openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b9mz9. OVN-Kubernetes controller took 3.408e-05 seconds. 
No OVN measurement.\\\\nI1014 13:15:33.963197 6443 ovnkube.go:599] Stopped ovnkube\\\\nI1014 13:15:33.963196 6443 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1014 13:15:33.963210 6443 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1014 13:15:33.963242 6443 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1014 13:15:33.963343 6443 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c17bed2977625610ad16227eb2cf9d89f9964a0b6a3305056c46d4230602f4f9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T13:16:03Z\\\",\\\"message\\\":\\\"g{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-controller-manager/controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.149\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1014 13:16:03.045916 6802 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-n9mfx\\\\nI1014 13:16:03.045925 6802 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-n9mfx in node crc\\\\nI1014 13:16:03.045931 6802 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-n9mfx after 0 failed attempt(s)\\\\nI1014 13:16:03.045938 6802 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-n9mfx\\\\nF1014 13:16:03.045902 6802 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared 
inform\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:16:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9v9qj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:04Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.104729 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8fjcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db996ea5-a3bf-4db3-a0df-fdd640228c83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1932e79f25b11a7dd11ee7d1b43139b7b370aac134f24741288e73006992f497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.16
8.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8fjcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:04Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.122794 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3dd5bed9576c9be576b8118eff0593b0eaf3d8fd8614dfbf566906a78f249e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmpdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b75fa898729a1cbd704de8ab24f7d7bb470e00eb9f3ee51e23d86222961b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\
\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmpdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jbldr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:04Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.136530 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:04Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.150633 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9mfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"768af815-9351-4b60-a9de-9f188049acd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d9cb10af1479948d9b616c0c054bea341c96c4964809f1b208472617a376c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8zz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9mfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:04Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.164044 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7400c43922418faea2f15eab69dfcdb6d01fd14117423a0ce3de487078acd746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cc50d2bb6d8b894c9524b2557eda22f857a08c286a90a95ab4be265b524020c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9hh9\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:04Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.176389 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.176471 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.176490 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.176536 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.176553 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:04Z","lastTransitionTime":"2025-10-14T13:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.179992 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ca808e-edcf-41b9-99a5-f10fcbbe6e72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7db8a1777567b9c9c186da537b33dd2a68202d45022e33ec691385e647ef5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a26fe24f459eeb79dbe24b7b637389b9ba47f6880369e92a4c823d216d17b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c401885a70b38cf87c5573364a8d07a79bd451eca6a6263abb37958614e9cf8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24408f31c736057fab41e161d394fe99f6d932020a796afc376013e2164aa209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24408f31c736057fab41e161d394fe99f6d932020a796afc376013e2164aa209\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:04Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.195687 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60823bb2-3f65-441b-968e-bee1ef699eaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369d927b570ab87076cac9e54716473a0856899d0f4697b4b30d07a5181c4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ddcef18a4ca3491877f3ed3e14e43e7a478afef84fe6d9614c5dfd52c25864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb26ea922d538bc6c5cff6f8d0474cefe723ab38afb6a734a1f412d7591355ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3920ebe7d3eb6b372d24be3b36b16b12c48ea827aae7cf0d00fd707b6681d857\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW1014 13:15:05.624087 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 13:15:05.624866 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 13:15:05.626194 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3329203933/tls.crt::/tmp/serving-cert-3329203933/tls.key\\\\\\\"\\\\nI1014 13:15:06.092172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 13:15:06.112645 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 13:15:06.112671 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 13:15:06.112690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 13:15:06.112696 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 13:15:06.126751 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 13:15:06.126778 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126783 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 13:15:06.126790 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 13:15:06.126793 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 13:15:06.126795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 13:15:06.127007 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 13:15:06.128605 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360cb71d40b47c350378789affa9d40d1097c487f19759ceac7dd653211dc523\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:04Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.210423 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:04Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.279008 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.279057 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.279069 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.279086 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.279097 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:04Z","lastTransitionTime":"2025-10-14T13:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.382328 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.382397 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.382419 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.382443 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.382514 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:04Z","lastTransitionTime":"2025-10-14T13:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.424754 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9v9qj_38d54d71-93d1-4cde-940e-a371117f59bd/ovnkube-controller/3.log" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.428272 4725 scope.go:117] "RemoveContainer" containerID="c17bed2977625610ad16227eb2cf9d89f9964a0b6a3305056c46d4230602f4f9" Oct 14 13:16:04 crc kubenswrapper[4725]: E1014 13:16:04.428627 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-9v9qj_openshift-ovn-kubernetes(38d54d71-93d1-4cde-940e-a371117f59bd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.447340 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:04Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.459412 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9mfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"768af815-9351-4b60-a9de-9f188049acd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d9cb10af1479948d9b616c0c054bea341c96c4964809f1b208472617a376c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8zz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9mfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:04Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.471364 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7400c43922418faea2f15eab69dfcdb6d01fd14117423a0ce3de487078acd746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cc50d2bb6d8b894c9524b2557eda22f857a08c286a90a95ab4be265b524020c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9hh9\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:04Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.485264 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.485335 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.485359 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.485391 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.485432 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:04Z","lastTransitionTime":"2025-10-14T13:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.498008 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d54d71-93d1-4cde-940e-a371117f59bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c17bed2977625610ad16227eb2cf9d89f9964a0b
6a3305056c46d4230602f4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c17bed2977625610ad16227eb2cf9d89f9964a0b6a3305056c46d4230602f4f9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T13:16:03Z\\\",\\\"message\\\":\\\"g{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-controller-manager/controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.149\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1014 13:16:03.045916 6802 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-n9mfx\\\\nI1014 13:16:03.045925 6802 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-n9mfx in node crc\\\\nI1014 13:16:03.045931 6802 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-n9mfx after 0 failed attempt(s)\\\\nI1014 13:16:03.045938 6802 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-n9mfx\\\\nF1014 13:16:03.045902 6802 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared inform\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:16:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9v9qj_openshift-ovn-kubernetes(38d54d71-93d1-4cde-940e-a371117f59bd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9v9qj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:04Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.508910 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8fjcf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db996ea5-a3bf-4db3-a0df-fdd640228c83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1932e79f25b11a7dd11ee7d1b43139b7b370aac134f24741288e73006992f497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8fjcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:04Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.518721 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3dd5bed9576c9be576b8118eff0593b0eaf3d8fd8614dfbf566906a78f249e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmpdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b75fa898729a1cbd704de8ab24f7d7bb470e00eb9f3ee51e23d86222961b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmpdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jbldr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:04Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.532752 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ca808e-edcf-41b9-99a5-f10fcbbe6e72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7db8a1777567b9c9c186da537b33dd2a68202d45022e33ec691385e647ef5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a26fe24f459eeb79dbe24b7b637389b9ba47f6880369e92a4c823d216d17b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name
\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c401885a70b38cf87c5573364a8d07a79bd451eca6a6263abb37958614e9cf8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24408f31c736057fab41e161d394fe99f6d932020a796afc376013e2164aa209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24408f31c736057fab41e161d394fe99f6d932020a796afc376013e2164aa209\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:04Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.552515 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60823bb2-3f65-441b-968e-bee1ef699eaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369d927b570ab87076cac9e54716473a0856899d0f4697b4b30d07a5181c4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ddcef18a4ca3491877f3ed3e14e43e7a478afef84fe6d9614c5dfd52c25864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb26ea922d538bc6c5cff6f8d0474cefe723ab38afb6a734a1f412d7591355ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3920ebe7d3eb6b372d24be3b36b16b12c48ea827aae7cf0d00fd707b6681d857\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW1014 13:15:05.624087 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 13:15:05.624866 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 13:15:05.626194 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3329203933/tls.crt::/tmp/serving-cert-3329203933/tls.key\\\\\\\"\\\\nI1014 13:15:06.092172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 13:15:06.112645 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 13:15:06.112671 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 13:15:06.112690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 13:15:06.112696 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 13:15:06.126751 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 13:15:06.126778 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126783 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 13:15:06.126790 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 13:15:06.126793 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 13:15:06.126795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 13:15:06.127007 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 13:15:06.128605 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360cb71d40b47c350378789affa9d40d1097c487f19759ceac7dd653211dc523\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:04Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.568246 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:04Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.585176 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55e8261f-c97f-4d46-b31f-6ef737e6c4c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92d8ba988206535f4ad53245b92484c9cfaf86d2a768d171b6a56a3d71ce2855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c3fa61126ab1de3e82ff31cd2d54b72d6a4355db0dbe16a1300f5349ec032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76dc71de9a8769dc0878cc796ad53a4355d128d9d3011601d1e10127a390b87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968caf419d5ddbb9213c7325a25516bdec5108f010a0e3457faa716b6f6f8955\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:04Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.587971 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.588023 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.588045 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.588074 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.588096 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:04Z","lastTransitionTime":"2025-10-14T13:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.597666 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:04Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.614321 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c335f15dced5e4240b10bf8cf93f05aaa7954860de085d7d25e465ac49fcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ada4aa976fb1be52ba1daea092bc4a477f4fa57a4175db09718886f670cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:04Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.628582 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbgwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4ed727c-f4d1-47cd-a218-e22803eb1750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b42368f230c49822144cc9910a09a22f268f840c3540e0a68f7c16659e5061bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e839b3fcf6bd42a4f3c8167cfc3a8ce3d333f519ef8fa12daa8bfb7f795e6bb6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T13:15:55Z\\\",\\\"message\\\":\\\"2025-10-14T13:15:09+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_43913cc9-3166-40cc-b691-06541d666bb9\\\\n2025-10-14T13:15:09+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_43913cc9-3166-40cc-b691-06541d666bb9 to /host/opt/cni/bin/\\\\n2025-10-14T13:15:09Z [verbose] multus-daemon started\\\\n2025-10-14T13:15:09Z [verbose] Readiness Indicator file check\\\\n2025-10-14T13:15:54Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gn97f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbgwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:04Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.639420 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cxcmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8cj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8cj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cxcmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:04Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.657645 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf13954e4938ed6d4493d155d0d8d3ddd585e161184d1869879f8464fb537bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:04Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.670356 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0481199b2dfab1f5e16e57b9e99c70988b1fcdf15877173722a436a0c7cd93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:04Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.689525 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f27c973f-d487-4b38-8921-f9c96635219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f246f81af5a032ce9db26f4f99c6cbcc43eb7ff2674a8de64d385efea9ae280d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l7nwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:04Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.690243 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.690285 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:04 crc 
kubenswrapper[4725]: I1014 13:16:04.690295 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.690312 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.690323 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:04Z","lastTransitionTime":"2025-10-14T13:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[the same five-entry sequence — NodeHasSufficientMemory / NodeHasNoDiskPressure / NodeHasSufficientPID / NodeNotReady events plus the identical "Node became not ready" condition — is recorded again at 13:16:04.793 and 13:16:04.896]
Oct 14 13:16:04 crc kubenswrapper[4725]: I1014 13:16:04.920909 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:16:04 crc kubenswrapper[4725]: E1014 13:16:04.921083 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b"
[the five-entry node-status sequence repeats at 13:16:04.999, 13:16:05.102, 13:16:05.205, 13:16:05.308, 13:16:05.411, 13:16:05.515, 13:16:05.619 and 13:16:05.723]
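Every "failed calling webhook" error above has the same root cause: the serving certificate for pod.network-node-identity.openshift.io expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-10-14. What follows is a minimal Go sketch, with a hypothetical certificate path (not OpenShift tooling), of the validity-window check that crypto/x509 performs before returning the "certificate has expired or is not yet valid" error seen in these entries.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Hypothetical path: wherever the webhook's serving certificate is mounted.
	data, err := os.ReadFile("/path/to/webhook-serving-cert.pem")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block in input")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now()
	fmt.Printf("NotBefore=%s NotAfter=%s now=%s\n", cert.NotBefore, cert.NotAfter, now)
	switch {
	case now.After(cert.NotAfter):
		// The condition behind "current time ... is after 2025-08-24T17:21:41Z".
		fmt.Println("certificate has expired")
	case now.Before(cert.NotBefore):
		fmt.Println("certificate is not yet valid")
	default:
		fmt.Println("certificate is within its validity window")
	}
}

On a live host the same window can be read with openssl x509 -noout -dates; the point is that no amount of retrying the Post to https://127.0.0.1:9743/pod will succeed until the certificate is rotated.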
Oct 14 13:16:05 crc kubenswrapper[4725]: I1014 13:16:05.743574 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:16:05 crc kubenswrapper[4725]: I1014 13:16:05.744687 4725 scope.go:117] "RemoveContainer" containerID="c17bed2977625610ad16227eb2cf9d89f9964a0b6a3305056c46d4230602f4f9" Oct 14 13:16:05 crc kubenswrapper[4725]: E1014 13:16:05.744886 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-9v9qj_openshift-ovn-kubernetes(38d54d71-93d1-4cde-940e-a371117f59bd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" podUID="38d54d71-93d1-4cde-940e-a371117f59bd"
[the five-entry node-status sequence repeats at 13:16:05.826]
Oct 14 13:16:05 crc kubenswrapper[4725]: I1014 13:16:05.920975 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:16:05 crc kubenswrapper[4725]: E1014 13:16:05.921135 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 13:16:05 crc kubenswrapper[4725]: I1014 13:16:05.921165 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:16:05 crc kubenswrapper[4725]: I1014 13:16:05.921217 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:16:05 crc kubenswrapper[4725]: E1014 13:16:05.921433 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 13:16:05 crc kubenswrapper[4725]: E1014 13:16:05.921646 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
[the five-entry node-status sequence repeats at 13:16:05.928]
Oct 14 13:16:05 crc kubenswrapper[4725]: I1014 13:16:05.933755 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
[the five-entry node-status sequence repeats at 13:16:06.032]
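The "back-off 40s restarting failed container" entry for ovnkube-controller is the kubelet's crash-loop back-off at work: per the upstream Kubernetes documentation the delay starts at 10s and doubles with each restart, capped at 5m, so 40s corresponds to the third consecutive failure. A small illustrative sketch of that policy (not the kubelet's actual implementation):

package main

import (
	"fmt"
	"time"
)

// crashLoopDelay models the documented kubelet policy: a base delay doubled
// once per prior restart, never exceeding the cap.
func crashLoopDelay(restarts int, base, limit time.Duration) time.Duration {
	d := base
	for i := 0; i < restarts; i++ {
		d *= 2
		if d >= limit {
			return limit
		}
	}
	return d
}

func main() {
	for r := 0; r <= 5; r++ {
		fmt.Printf("restart %d -> wait %s\n", r, crashLoopDelay(r, 10*time.Second, 5*time.Minute))
	}
}

Running this prints 10s, 20s, 40s, 1m20s, 2m40s, 5m0s — the 40s seen here sits two doublings in.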
[the five-entry node-status sequence repeats at 13:16:06.134, 13:16:06.236, 13:16:06.339, 13:16:06.442, 13:16:06.546, 13:16:06.649, 13:16:06.753 and 13:16:06.856]
Oct 14 13:16:06 crc kubenswrapper[4725]: I1014 13:16:06.920964 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:16:06 crc kubenswrapper[4725]: E1014 13:16:06.921152 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b"
[the five-entry node-status sequence repeats at 13:16:06.959 and 13:16:07.061]
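Each "Node became not ready" condition above carries the same message: the runtime reports NetworkReady=false because /etc/kubernetes/cni/net.d/ contains no CNI configuration, and nothing will write one while ovnkube-controller is crash-looping. As a rough illustration (the glob patterns are an assumption, not CRI-O's exact logic), the readiness check amounts to:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log message
	var confs []string
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		m, _ := filepath.Glob(filepath.Join(confDir, pat))
		confs = append(confs, m...)
	}
	if len(confs) == 0 {
		// The state behind NetworkReady=false / NetworkPluginNotReady.
		fmt.Println("NetworkReady=false: no CNI configuration file in", confDir)
		os.Exit(1)
	}
	fmt.Println("CNI configs:", confs)
}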
[the five-entry node-status sequence repeats at 13:16:07.164, 13:16:07.268, 13:16:07.372, 13:16:07.475, 13:16:07.579, 13:16:07.683, 13:16:07.786 and 13:16:07.890]
Oct 14 13:16:07 crc kubenswrapper[4725]: I1014 13:16:07.920086 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:16:07 crc kubenswrapper[4725]: I1014 13:16:07.920167 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:16:07 crc kubenswrapper[4725]: I1014 13:16:07.920110 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:16:07 crc kubenswrapper[4725]: E1014 13:16:07.920246 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 13:16:07 crc kubenswrapper[4725]: E1014 13:16:07.920368 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 13:16:07 crc kubenswrapper[4725]: E1014 13:16:07.920443 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
[the five-entry node-status sequence repeats at 13:16:07.993 and 13:16:08.095]
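The condition objects emitted by setters.go are ordinary node conditions; in the entries above, lastHeartbeatTime advances with every status sync while lastTransitionTime only moves when Ready actually flips. A sketch that parses one of the logged payloads into a struct shaped like (but not imported from) k8s.io/api/core/v1.NodeCondition:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"time"
)

type NodeCondition struct {
	Type               string    `json:"type"`
	Status             string    `json:"status"`
	LastHeartbeatTime  time.Time `json:"lastHeartbeatTime"`
	LastTransitionTime time.Time `json:"lastTransitionTime"`
	Reason             string    `json:"reason"`
	Message            string    `json:"message"`
}

func main() {
	// Message abridged from the logged condition.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:07Z","lastTransitionTime":"2025-10-14T13:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false"}`
	var c NodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s=%s since %s (reason %s)\n", c.Type, c.Status, c.LastTransitionTime, c.Reason)
}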
Has your network provider started?"} Oct 14 13:16:08 crc kubenswrapper[4725]: I1014 13:16:08.198524 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:08 crc kubenswrapper[4725]: I1014 13:16:08.198595 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:08 crc kubenswrapper[4725]: I1014 13:16:08.198617 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:08 crc kubenswrapper[4725]: I1014 13:16:08.198690 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:08 crc kubenswrapper[4725]: I1014 13:16:08.198718 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:08Z","lastTransitionTime":"2025-10-14T13:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:08 crc kubenswrapper[4725]: I1014 13:16:08.301092 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:08 crc kubenswrapper[4725]: I1014 13:16:08.301128 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:08 crc kubenswrapper[4725]: I1014 13:16:08.301135 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:08 crc kubenswrapper[4725]: I1014 13:16:08.301150 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:08 crc kubenswrapper[4725]: I1014 13:16:08.301158 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:08Z","lastTransitionTime":"2025-10-14T13:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:08 crc kubenswrapper[4725]: I1014 13:16:08.403706 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:08 crc kubenswrapper[4725]: I1014 13:16:08.403766 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:08 crc kubenswrapper[4725]: I1014 13:16:08.403780 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:08 crc kubenswrapper[4725]: I1014 13:16:08.403794 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:08 crc kubenswrapper[4725]: I1014 13:16:08.403805 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:08Z","lastTransitionTime":"2025-10-14T13:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:08 crc kubenswrapper[4725]: I1014 13:16:08.506600 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:08 crc kubenswrapper[4725]: I1014 13:16:08.506640 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:08 crc kubenswrapper[4725]: I1014 13:16:08.506650 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:08 crc kubenswrapper[4725]: I1014 13:16:08.506665 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:08 crc kubenswrapper[4725]: I1014 13:16:08.506675 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:08Z","lastTransitionTime":"2025-10-14T13:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:08 crc kubenswrapper[4725]: I1014 13:16:08.609090 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:08 crc kubenswrapper[4725]: I1014 13:16:08.609123 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:08 crc kubenswrapper[4725]: I1014 13:16:08.609132 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:08 crc kubenswrapper[4725]: I1014 13:16:08.609146 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:08 crc kubenswrapper[4725]: I1014 13:16:08.609154 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:08Z","lastTransitionTime":"2025-10-14T13:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:08 crc kubenswrapper[4725]: I1014 13:16:08.712343 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:08 crc kubenswrapper[4725]: I1014 13:16:08.712412 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:08 crc kubenswrapper[4725]: I1014 13:16:08.712431 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:08 crc kubenswrapper[4725]: I1014 13:16:08.712488 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:08 crc kubenswrapper[4725]: I1014 13:16:08.712512 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:08Z","lastTransitionTime":"2025-10-14T13:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:08 crc kubenswrapper[4725]: I1014 13:16:08.816306 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:08 crc kubenswrapper[4725]: I1014 13:16:08.816386 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:08 crc kubenswrapper[4725]: I1014 13:16:08.816408 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:08 crc kubenswrapper[4725]: I1014 13:16:08.816437 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:08 crc kubenswrapper[4725]: I1014 13:16:08.816513 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:08Z","lastTransitionTime":"2025-10-14T13:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:08 crc kubenswrapper[4725]: I1014 13:16:08.918780 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:08 crc kubenswrapper[4725]: I1014 13:16:08.918847 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:08 crc kubenswrapper[4725]: I1014 13:16:08.918878 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:08 crc kubenswrapper[4725]: I1014 13:16:08.918908 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:08 crc kubenswrapper[4725]: I1014 13:16:08.918927 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:08Z","lastTransitionTime":"2025-10-14T13:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:08 crc kubenswrapper[4725]: I1014 13:16:08.920442 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:16:08 crc kubenswrapper[4725]: E1014 13:16:08.920680 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b" Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.022218 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.022292 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.022308 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.022331 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.022347 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:09Z","lastTransitionTime":"2025-10-14T13:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.125017 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.125094 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.125111 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.125134 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.125154 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:09Z","lastTransitionTime":"2025-10-14T13:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.228503 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.228581 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.228616 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.228652 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.228674 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:09Z","lastTransitionTime":"2025-10-14T13:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.331829 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.331907 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.331934 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.331967 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.331990 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:09Z","lastTransitionTime":"2025-10-14T13:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.435841 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.435914 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.435933 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.435956 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.435971 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:09Z","lastTransitionTime":"2025-10-14T13:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.539896 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.540051 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.540081 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.540109 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.540131 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:09Z","lastTransitionTime":"2025-10-14T13:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.642914 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.642984 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.643007 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.643035 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.643057 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:09Z","lastTransitionTime":"2025-10-14T13:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.745763 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.745806 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.745817 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.745833 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.745845 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:09Z","lastTransitionTime":"2025-10-14T13:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.848601 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.848670 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.848687 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.848712 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.848728 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:09Z","lastTransitionTime":"2025-10-14T13:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.921145 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.921186 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.921186 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 14 13:16:09 crc kubenswrapper[4725]: E1014 13:16:09.921582 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 14 13:16:09 crc kubenswrapper[4725]: E1014 13:16:09.921802 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 14 13:16:09 crc kubenswrapper[4725]: E1014 13:16:09.921871 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.950952 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.951020 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.951031 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.951104 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.951119 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:09Z","lastTransitionTime":"2025-10-14T13:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.956445 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.956641 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 14 13:16:09 crc kubenswrapper[4725]: E1014 13:16:09.956673 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:13.956644741 +0000 UTC m=+150.805079570 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 13:16:09 crc kubenswrapper[4725]: E1014 13:16:09.956726 4725 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Oct 14 13:16:09 crc kubenswrapper[4725]: E1014 13:16:09.956776 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 13:17:13.956763634 +0000 UTC m=+150.805198433 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.956797 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.956836 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 14 13:16:09 crc kubenswrapper[4725]: I1014 13:16:09.956920 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 14 13:16:09 crc kubenswrapper[4725]: E1014 13:16:09.956960 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Oct 14 13:16:09 crc kubenswrapper[4725]: E1014 13:16:09.956988 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Oct 14 13:16:09 crc kubenswrapper[4725]: E1014 13:16:09.957004 4725 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 14 13:16:09 crc kubenswrapper[4725]: E1014 13:16:09.957012 4725 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Oct 14 13:16:09 crc kubenswrapper[4725]: E1014 13:16:09.957059 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-14 13:17:13.957043991 +0000 UTC m=+150.805478840 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 14 13:16:09 crc kubenswrapper[4725]: E1014 13:16:09.957088 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-14 13:17:13.957075102 +0000 UTC m=+150.805510021 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Oct 14 13:16:09 crc kubenswrapper[4725]: E1014 13:16:09.957216 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Oct 14 13:16:09 crc kubenswrapper[4725]: E1014 13:16:09.957241 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Oct 14 13:16:09 crc kubenswrapper[4725]: E1014 13:16:09.957255 4725 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 14 13:16:09 crc kubenswrapper[4725]: E1014 13:16:09.957296 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-14 13:17:13.957282518 +0000 UTC m=+150.805717397 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.053584 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.053624 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.053633 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.053648 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.053658 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:10Z","lastTransitionTime":"2025-10-14T13:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.156703 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.156758 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.156771 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.156787 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.156799 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:10Z","lastTransitionTime":"2025-10-14T13:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.259571 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.259610 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.259619 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.259652 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.259676 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:10Z","lastTransitionTime":"2025-10-14T13:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.362171 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.362199 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.362210 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.362225 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.362234 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:10Z","lastTransitionTime":"2025-10-14T13:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.465164 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.465251 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.465272 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.465297 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.465314 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:10Z","lastTransitionTime":"2025-10-14T13:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.569099 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.569237 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.569781 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.569887 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.569966 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:10Z","lastTransitionTime":"2025-10-14T13:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.672604 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.672673 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.672685 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.672751 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.672765 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:10Z","lastTransitionTime":"2025-10-14T13:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.776875 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.776929 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.776941 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.776988 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.777003 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:10Z","lastTransitionTime":"2025-10-14T13:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.881510 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.881593 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.881613 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.881641 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.881664 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:10Z","lastTransitionTime":"2025-10-14T13:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.921080 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:16:10 crc kubenswrapper[4725]: E1014 13:16:10.921405 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b" Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.984977 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.985047 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.985068 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.985098 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:10 crc kubenswrapper[4725]: I1014 13:16:10.985119 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:10Z","lastTransitionTime":"2025-10-14T13:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.088116 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.088155 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.088164 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.088178 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.088189 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:11Z","lastTransitionTime":"2025-10-14T13:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.192300 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.192645 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.192664 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.192692 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.192712 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:11Z","lastTransitionTime":"2025-10-14T13:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.296685 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.296786 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.296810 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.296845 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.296865 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:11Z","lastTransitionTime":"2025-10-14T13:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.401977 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.402057 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.402079 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.402111 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.402133 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:11Z","lastTransitionTime":"2025-10-14T13:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.505359 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.505406 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.505416 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.505433 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.505445 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:11Z","lastTransitionTime":"2025-10-14T13:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.607700 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.607781 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.607800 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.607824 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.607841 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:11Z","lastTransitionTime":"2025-10-14T13:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.710866 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.710920 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.710935 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.710955 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.710975 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:11Z","lastTransitionTime":"2025-10-14T13:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.813531 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.813608 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.813633 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.813662 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.813685 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:11Z","lastTransitionTime":"2025-10-14T13:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.917173 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.917245 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.917267 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.917297 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.917354 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:11Z","lastTransitionTime":"2025-10-14T13:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.920995 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.921038 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:16:11 crc kubenswrapper[4725]: I1014 13:16:11.921010 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:16:11 crc kubenswrapper[4725]: E1014 13:16:11.921146 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 13:16:11 crc kubenswrapper[4725]: E1014 13:16:11.921328 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 13:16:11 crc kubenswrapper[4725]: E1014 13:16:11.921543 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.021183 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.021244 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.021256 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.021276 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.021288 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:12Z","lastTransitionTime":"2025-10-14T13:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.124877 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.124951 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.124966 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.124987 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.125003 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:12Z","lastTransitionTime":"2025-10-14T13:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.228712 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.228801 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.228820 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.228853 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.228879 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:12Z","lastTransitionTime":"2025-10-14T13:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.332132 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.332210 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.332239 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.332310 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.332336 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:12Z","lastTransitionTime":"2025-10-14T13:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.435614 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.435660 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.435675 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.435693 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.435712 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:12Z","lastTransitionTime":"2025-10-14T13:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.538788 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.538891 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.538913 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.538939 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.538957 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:12Z","lastTransitionTime":"2025-10-14T13:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.642408 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.642530 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.642550 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.642578 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.642597 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:12Z","lastTransitionTime":"2025-10-14T13:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.746175 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.746253 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.746278 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.746310 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.746335 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:12Z","lastTransitionTime":"2025-10-14T13:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.848802 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.848908 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.848929 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.848947 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.848958 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:12Z","lastTransitionTime":"2025-10-14T13:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.920722 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:16:12 crc kubenswrapper[4725]: E1014 13:16:12.920940 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
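[Annotation] Every "NodeNotReady" and "Error syncing pod, skipping" record above traces back to the same condition: the node reports NetworkReady=false until a CNI configuration file appears in /etc/kubernetes/cni/net.d/ (the directory named verbatim in the error message). The sketch below is a minimal, illustrative reproduction of that readiness test, not the kubelet's or CRI-O's actual implementation; the accepted file extensions are an assumption based on common CNI loader behavior.

    #!/usr/bin/env python3
    """Illustrative sketch of the readiness check implied by the log:
    the network plugin is treated as ready only once a CNI config file
    exists in the directory named in the error message."""
    import glob
    import os

    # Directory taken verbatim from the kubelet error message above.
    CNI_CONF_DIR = "/etc/kubernetes/cni/net.d"

    def cni_configs(conf_dir: str = CNI_CONF_DIR) -> list[str]:
        # CNI config loaders commonly accept .conf, .conflist and .json
        # files (assumption; exact rules vary by runtime version).
        patterns = ("*.conf", "*.conflist", "*.json")
        found: list[str] = []
        for pattern in patterns:
            found.extend(glob.glob(os.path.join(conf_dir, pattern)))
        return sorted(found)

    if __name__ == "__main__":
        configs = cni_configs()
        if configs:
            print("NetworkReady=true, config(s):", ", ".join(configs))
        else:
            # This is the state the node is stuck in throughout this log.
            print("NetworkReady=false: no CNI configuration file in", CNI_CONF_DIR)

On this node the directory is empty because the network operator's pods themselves cannot start, which is why the same message repeats at every status sync.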
pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b" Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.952025 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.952124 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.952159 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.952195 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:12 crc kubenswrapper[4725]: I1014 13:16:12.952225 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:12Z","lastTransitionTime":"2025-10-14T13:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.057749 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.057808 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.057819 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.057840 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.057856 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:13Z","lastTransitionTime":"2025-10-14T13:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.160910 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.160985 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.161029 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.161066 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.161091 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:13Z","lastTransitionTime":"2025-10-14T13:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.264577 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.264644 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.264664 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.264685 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.264701 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:13Z","lastTransitionTime":"2025-10-14T13:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.323255 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.323339 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.323367 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.323432 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.323500 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:13Z","lastTransitionTime":"2025-10-14T13:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:13 crc kubenswrapper[4725]: E1014 13:16:13.340275 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:16:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:16:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:16:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:16:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d8471f97-cd84-4e08-baca-4ac91f02188a\\\",\\\"systemUUID\\\":\\\"c40d671b-403d-4187-8320-a34d153a3ed0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:13Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.347425 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.347505 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
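[Annotation] The "Error updating node status, will retry" record above shows the actual blocker: the status patch is rejected because the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-10-14. A quick way to confirm this from the node is to open a verified TLS connection to the same endpoint and observe the verification failure. The sketch below does exactly that; the host and port are taken from the log, and reaching the port from where the script runs is an assumption.

    #!/usr/bin/env python3
    """Diagnostic sketch: reproduce the TLS verification failure from
    the webhook error above by handshaking with the same endpoint."""
    import socket
    import ssl

    # Endpoint from: Post "https://127.0.0.1:9743/node?timeout=10s"
    HOST, PORT = "127.0.0.1", 9743

    ctx = ssl.create_default_context()
    try:
        with socket.create_connection((HOST, PORT), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
                print("handshake OK, cert notAfter:", tls.getpeercert()["notAfter"])
    except ssl.SSLCertVerificationError as err:
        # With the certificate in this log this prints the same
        # "certificate has expired" verification error the kubelet hit.
        print("verification failed:", err.verify_message)

Until that certificate is rotated (or the node clock agrees with the certificate's validity window), every node-status patch will keep failing with the same x509 error, independent of the CNI problem above.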
event="NodeHasNoDiskPressure" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.347519 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.347543 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.347563 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:13Z","lastTransitionTime":"2025-10-14T13:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:13 crc kubenswrapper[4725]: E1014 13:16:13.370277 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:16:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:16:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:16:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:16:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d8471f97-cd84-4e08-baca-4ac91f02188a\\\",\\\"systemUUID\\\":\\\"c40d671b-403d-4187-8320-a34d153a3ed0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:13Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.375154 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.375257 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.375281 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.375311 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.375330 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:13Z","lastTransitionTime":"2025-10-14T13:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:13 crc kubenswrapper[4725]: E1014 13:16:13.395063 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:16:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:16:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:16:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:16:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d8471f97-cd84-4e08-baca-4ac91f02188a\\\",\\\"systemUUID\\\":\\\"c40d671b-403d-4187-8320-a34d153a3ed0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:13Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.400434 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.400500 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
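[Annotation] The status-patch payload embedded in the three failed-patch records above is ordinary JSON that the logger has quoted, so every double quote appears as an escaped sequence and the document is hard to read in place. A small helper can strip one quoting layer and pretty-print it. The sample payload below is a shortened fragment whose field values are copied from the log; real entries may need the replacement applied more than once, depending on how many quoting layers the log collector added (assumption).

    #!/usr/bin/env python3
    """Helper sketch for reading the quoted JSON payloads in the
    "Error updating node status" records above."""
    import json

    # Shortened fragment of the Ready condition, values copied from the log.
    raw = r'{\"type\":\"Ready\",\"status\":\"False\",\"reason\":\"KubeletNotReady\"}'

    def unescape_quoted_json(s: str) -> dict:
        # Each quoting layer turns " into \" -- undo one layer, then parse.
        return json.loads(s.replace(r'\"', '"'))

    print(json.dumps(unescape_quoted_json(raw), indent=2))

Decoded this way, the payload confirms the patch carries only the four node conditions, the image list, and nodeInfo; nothing in the payload itself is malformed, so the rejection is entirely the webhook's TLS failure.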
event="NodeHasNoDiskPressure" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.400514 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.400532 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.400545 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:13Z","lastTransitionTime":"2025-10-14T13:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:13 crc kubenswrapper[4725]: E1014 13:16:13.417366 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:16:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:16:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:16:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:16:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d8471f97-cd84-4e08-baca-4ac91f02188a\\\",\\\"systemUUID\\\":\\\"c40d671b-403d-4187-8320-a34d153a3ed0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:13Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.422161 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.422202 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.422215 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.422230 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.422241 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:13Z","lastTransitionTime":"2025-10-14T13:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:13 crc kubenswrapper[4725]: E1014 13:16:13.439767 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:16:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:16:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:16:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-14T13:16:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-14T13:16:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d8471f97-cd84-4e08-baca-4ac91f02188a\\\",\\\"systemUUID\\\":\\\"c40d671b-403d-4187-8320-a34d153a3ed0\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:13Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:13 crc kubenswrapper[4725]: E1014 13:16:13.439887 4725 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.448800 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.448842 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.448856 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.448875 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.448886 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:13Z","lastTransitionTime":"2025-10-14T13:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.552324 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.552402 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.552426 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.552498 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.552524 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:13Z","lastTransitionTime":"2025-10-14T13:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.656062 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.656167 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.656194 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.656229 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.656257 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:13Z","lastTransitionTime":"2025-10-14T13:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.759507 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.759567 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.759585 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.759609 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.759627 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:13Z","lastTransitionTime":"2025-10-14T13:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.864266 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.864328 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.864345 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.864370 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.864390 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:13Z","lastTransitionTime":"2025-10-14T13:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.920381 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.920398 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:16:13 crc kubenswrapper[4725]: E1014 13:16:13.920582 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.920422 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:16:13 crc kubenswrapper[4725]: E1014 13:16:13.920706 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 13:16:13 crc kubenswrapper[4725]: E1014 13:16:13.920767 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.936129 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf13954e4938ed6d4493d155d0d8d3ddd585e161184d1869879f8464fb537bb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:13Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.949367 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa0481199b2dfab1f5e16e57b9e99c70988b1fcdf15877173722a436a0c7cd93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:13Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.964629 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f27c973f-d487-4b38-8921-f9c96635219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f246f81af5a032ce9db26f4f99c6cbcc43eb7ff2674a8de64d385efea9ae280d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac8e2b32e2c1e458b16e2993483a46cca7602eb94ea41432977c3734594ef1e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1754ed0406f4dc8e7cf3d993f0e09d1d164e5f745fbe0f69a162d0e174f1f045\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3443a09977070a469a5fcb2b2844f926da16b9e05466b18073b4cd28e29afb6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74bb130ff8f86bdb45e4d8067742b77da88a4941437427229b92c59bd5e43a7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b69053ed8b28012229cb18a60af3044268c7aa4b8c945ccca823bf6ac6344433\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07d1b043b00eb16d4672e4c8115f9a27373e394851e9b5c13a27202892f8cc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v45p2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-l7nwj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:13Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.967409 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.967514 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:13 crc 
kubenswrapper[4725]: I1014 13:16:13.967529 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.967548 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.967561 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:13Z","lastTransitionTime":"2025-10-14T13:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.978499 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba13e395-a6be-4eb2-8bcb-4ebbe8a55b8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3dd5bed9576c9be576b8118eff0593b0eaf3d8fd8614dfbf566906a78f249e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmpdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b75fa898729a1cbd704de8ab24f7d7bb470e00eb9f3ee51e23d86222961b81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:1
5:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmpdj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jbldr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:13Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:13 crc kubenswrapper[4725]: I1014 13:16:13.994686 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:13Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:14 crc kubenswrapper[4725]: I1014 13:16:14.006674 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-n9mfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"768af815-9351-4b60-a9de-9f188049acd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d9cb10af1479948d9b616c0c054bea341c96c4964809f1b208472617a376c8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hn8zz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-n9mfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:14 crc kubenswrapper[4725]: I1014 13:16:14.021585 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7400c43922418faea2f15eab69dfcdb6d01fd14117423a0ce3de487078acd746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cc50d2bb6d8b894c9524b2557eda22f857a08c286a90a95ab4be265b524020c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hcldt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-t9hh9\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:14 crc kubenswrapper[4725]: I1014 13:16:14.042171 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d54d71-93d1-4cde-940e-a371117f59bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics
-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c17bed2977625610ad16227eb2cf9d89f9964a0b6a3305056c46d4230602f4f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c17bed2977625610ad16227eb2cf9d89f9964a0b6a3305056c46d4230602f4f9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T13:16:03Z\\\",\\\"message\\\":\\\"g{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-controller-manager/controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.149\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1014 13:16:03.045916 6802 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-n9mfx\\\\nI1014 13:16:03.045925 6802 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-n9mfx in node crc\\\\nI1014 13:16:03.045931 6802 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-n9mfx after 0 failed attempt(s)\\\\nI1014 13:16:03.045938 6802 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-n9mfx\\\\nF1014 13:16:03.045902 6802 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added 
to shared inform\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:16:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-9v9qj_openshift-ovn-kubernetes(38d54d71-93d1-4cde-940e-a371117f59bd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7h8xd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9v9qj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:14 crc kubenswrapper[4725]: I1014 13:16:14.056610 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8fjcf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db996ea5-a3bf-4db3-a0df-fdd640228c83\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1932e79f25b11a7dd11ee7d1b43139b7b370aac134f24741288e73006992f497\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dnnbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8fjcf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:14 crc kubenswrapper[4725]: I1014 13:16:14.071043 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:14 crc kubenswrapper[4725]: I1014 13:16:14.071083 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:14 crc kubenswrapper[4725]: I1014 13:16:14.071091 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:14 crc kubenswrapper[4725]: I1014 13:16:14.071151 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:14 crc kubenswrapper[4725]: I1014 13:16:14.071664 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:14Z","lastTransitionTime":"2025-10-14T13:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:14 crc kubenswrapper[4725]: I1014 13:16:14.073188 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68ca808e-edcf-41b9-99a5-f10fcbbe6e72\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de7db8a1777567b9c9c186da537b33dd2a68202d45022e33ec691385e647ef5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03a26fe24f459eeb79dbe24b7b637389b9ba47f6880369e92a4c823d216d17b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c401885a70b38cf87c5573364a8d07a79bd451eca6a6263abb37958614e9cf8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24408f31c736057fab41e161d394fe99f6d932020a796afc376013e2164aa209\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24408f31c736057fab41e161d394fe99f6d932020a796afc376013e2164aa209\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:14 crc kubenswrapper[4725]: I1014 13:16:14.090425 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60823bb2-3f65-441b-968e-bee1ef699eaa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a369d927b570ab87076cac9e54716473a0856899d0f4697b4b30d07a5181c4ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ddcef18a4ca3491877f3ed3e14e43e7a478afef84fe6d9614c5dfd52c25864\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb26ea922d538bc6c5cff6f8d0474cefe723ab38afb6a734a1f412d7591355ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3920ebe7d3eb6b372d24be3b36b16b12c48ea827aae7cf0d00fd707b6681d857\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1c8f13bcd5eec39ffebad70f21ca849695f7c275550eef9b1d74164d67d987\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"message\\\":\\\"le observer\\\\nW1014 13:15:05.624087 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1014 13:15:05.624866 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1014 13:15:05.626194 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3329203933/tls.crt::/tmp/serving-cert-3329203933/tls.key\\\\\\\"\\\\nI1014 13:15:06.092172 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1014 13:15:06.112645 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1014 13:15:06.112671 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1014 13:15:06.112690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1014 13:15:06.112696 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1014 13:15:06.126751 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1014 13:15:06.126778 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126783 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1014 13:15:06.126787 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1014 13:15:06.126790 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1014 13:15:06.126793 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1014 13:15:06.126795 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1014 13:15:06.127007 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1014 13:15:06.128605 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://360cb71d40b47c350378789affa9d40d1097c487f19759ceac7dd653211dc523\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f826409f9ab64e47971838a5a229f5fe353705eadcc749e559e610da210ca68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:14 crc kubenswrapper[4725]: I1014 13:16:14.106694 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:14 crc kubenswrapper[4725]: I1014 13:16:14.121178 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cxcmw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8cj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j8cj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cxcmw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:14 crc kubenswrapper[4725]: I1014 13:16:14.137991 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55e8261f-c97f-4d46-b31f-6ef737e6c4c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92d8ba988206535f4ad53245b92484c9cfaf86d2a768d171b6a56a3d71ce2855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1c3fa61126ab1de3e82ff31cd2d54b72d6a4355db0dbe16a1300f5349ec032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d76dc71de9a8769dc0878cc796ad53a4355d128d9d3011601d1e10127a390b87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968caf419d5ddbb9213c7325a25516bdec5108f010a0e3457faa716b6f6f8955\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:14 crc kubenswrapper[4725]: I1014 13:16:14.154591 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"26ba61a0-3a1d-480c-b996-1fee377f8e9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:14:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98123fb205ff3d4c97851b4bb515095342e86389ca41803f84a5a577388fa6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43256276b9c25611f2e8388c421780adc2e4bc57cc94b55622ad09a4d4742f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318
bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43256276b9c25611f2e8388c421780adc2e4bc57cc94b55622ad09a4d4742f08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-14T13:14:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-14T13:14:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:14:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:14 crc kubenswrapper[4725]: I1014 13:16:14.172029 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:14 crc kubenswrapper[4725]: I1014 13:16:14.173857 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:14 crc kubenswrapper[4725]: I1014 13:16:14.173903 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:14 crc kubenswrapper[4725]: I1014 13:16:14.173919 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:14 crc kubenswrapper[4725]: I1014 13:16:14.173941 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:14 crc kubenswrapper[4725]: I1014 13:16:14.173956 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:14Z","lastTransitionTime":"2025-10-14T13:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:14 crc kubenswrapper[4725]: I1014 13:16:14.194206 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c335f15dced5e4240b10bf8cf93f05aaa7954860de085d7d25e465ac49fcc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41ada4aa976fb1be52ba1daea092bc4a477f4fa57a4175db09718886f670cf1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:14 crc kubenswrapper[4725]: I1014 13:16:14.209005 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbgwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4ed727c-f4d1-47cd-a218-e22803eb1750\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-14T13:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b42368f230c49822144cc9910a09a22f268f840c3540e0a68f7c16659e5061bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e839b3fcf6bd42a4f3c8167cfc3a8ce3d333f519ef8fa12daa8bfb7f795e6bb6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-14T13:15:55Z\\\",\\\"message\\\":\\\"2025-10-14T13:15:09+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_43913cc9-3166-40cc-b691-06541d666bb9\\\\n2025-10-14T13:15:09+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_43913cc9-3166-40cc-b691-06541d666bb9 to /host/opt/cni/bin/\\\\n2025-10-14T13:15:09Z [verbose] multus-daemon started\\\\n2025-10-14T13:15:09Z [verbose] Readiness Indicator file check\\\\n2025-10-14T13:15:54Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-14T13:15:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-14T13:15:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gn97f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-14T13:15:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbgwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-14T13:16:14Z is after 2025-08-24T17:21:41Z" Oct 14 13:16:14 crc kubenswrapper[4725]: I1014 13:16:14.277528 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:14 crc kubenswrapper[4725]: I1014 13:16:14.277632 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:14 crc kubenswrapper[4725]: I1014 13:16:14.277716 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:14 crc kubenswrapper[4725]: I1014 13:16:14.277789 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:14 crc kubenswrapper[4725]: I1014 13:16:14.277814 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:14Z","lastTransitionTime":"2025-10-14T13:16:14Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... six near-identical status blocks follow at 13:16:14.379, .482, .586, .694, .799 and .902: the same four "Recording event message for node" lines (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady) and the same setters.go:603 "Node became not ready" KubeletNotReady condition ...]
Oct 14 13:16:14 crc kubenswrapper[4725]: I1014 13:16:14.921132 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw"
Oct 14 13:16:14 crc kubenswrapper[4725]: E1014 13:16:14.921818 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b"
Has your network provider started?"} Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.108309 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.108369 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.108383 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.108404 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.108419 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:15Z","lastTransitionTime":"2025-10-14T13:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.211282 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.211359 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.211375 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.211394 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.211407 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:15Z","lastTransitionTime":"2025-10-14T13:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.314523 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.314602 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.314622 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.314649 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.314668 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:15Z","lastTransitionTime":"2025-10-14T13:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.418411 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.418515 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.418533 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.418558 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.418576 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:15Z","lastTransitionTime":"2025-10-14T13:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.521189 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.521228 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.521239 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.521257 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.521268 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:15Z","lastTransitionTime":"2025-10-14T13:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.624065 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.624125 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.624147 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.624179 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.624203 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:15Z","lastTransitionTime":"2025-10-14T13:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.726845 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.726909 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.726971 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.726999 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.727012 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:15Z","lastTransitionTime":"2025-10-14T13:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.830151 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.830201 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.830213 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.830234 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.830246 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:15Z","lastTransitionTime":"2025-10-14T13:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.921109 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.921109 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:16:15 crc kubenswrapper[4725]: E1014 13:16:15.921263 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.921146 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:16:15 crc kubenswrapper[4725]: E1014 13:16:15.921418 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 13:16:15 crc kubenswrapper[4725]: E1014 13:16:15.921326 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.931741 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.931771 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.931780 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.931794 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:15 crc kubenswrapper[4725]: I1014 13:16:15.931801 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:15Z","lastTransitionTime":"2025-10-14T13:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.034083 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.034135 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.034147 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.034167 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.034180 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:16Z","lastTransitionTime":"2025-10-14T13:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.136715 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.136785 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.136797 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.136811 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.136821 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:16Z","lastTransitionTime":"2025-10-14T13:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.239879 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.239954 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.239978 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.240000 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.240019 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:16Z","lastTransitionTime":"2025-10-14T13:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.342292 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.342331 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.342342 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.342358 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.342369 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:16Z","lastTransitionTime":"2025-10-14T13:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.445755 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.445824 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.445834 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.445852 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.445863 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:16Z","lastTransitionTime":"2025-10-14T13:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.548944 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.549017 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.549030 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.549053 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.549067 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:16Z","lastTransitionTime":"2025-10-14T13:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.652197 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.652274 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.652292 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.652319 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.652338 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:16Z","lastTransitionTime":"2025-10-14T13:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.756198 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.756261 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.756274 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.756293 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.756305 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:16Z","lastTransitionTime":"2025-10-14T13:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.858761 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.858813 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.858826 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.858841 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.858853 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:16Z","lastTransitionTime":"2025-10-14T13:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.920170 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:16:16 crc kubenswrapper[4725]: E1014 13:16:16.920481 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b" Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.961943 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.961994 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.962009 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.962033 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:16 crc kubenswrapper[4725]: I1014 13:16:16.962048 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:16Z","lastTransitionTime":"2025-10-14T13:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.064250 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.064337 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.064385 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.064483 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.064504 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:17Z","lastTransitionTime":"2025-10-14T13:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.167891 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.167953 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.167971 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.167995 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.168013 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:17Z","lastTransitionTime":"2025-10-14T13:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.271209 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.271253 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.271265 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.271281 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.271292 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:17Z","lastTransitionTime":"2025-10-14T13:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.374575 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.374623 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.374636 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.374653 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.374663 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:17Z","lastTransitionTime":"2025-10-14T13:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.477815 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.477883 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.477913 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.477944 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.477966 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:17Z","lastTransitionTime":"2025-10-14T13:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.581006 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.581065 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.581307 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.581328 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.581345 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:17Z","lastTransitionTime":"2025-10-14T13:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.684915 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.685073 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.685087 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.685113 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.685134 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:17Z","lastTransitionTime":"2025-10-14T13:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.787672 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.787727 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.787736 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.787752 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.787764 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:17Z","lastTransitionTime":"2025-10-14T13:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.890255 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.890351 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.890359 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.890372 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.890381 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:17Z","lastTransitionTime":"2025-10-14T13:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.920694 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:16:17 crc kubenswrapper[4725]: E1014 13:16:17.920930 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.921654 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.921750 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:16:17 crc kubenswrapper[4725]: E1014 13:16:17.921931 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 13:16:17 crc kubenswrapper[4725]: E1014 13:16:17.922063 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.991997 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.992069 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.992088 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.992114 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:17 crc kubenswrapper[4725]: I1014 13:16:17.992133 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:17Z","lastTransitionTime":"2025-10-14T13:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:18 crc kubenswrapper[4725]: I1014 13:16:18.095651 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:18 crc kubenswrapper[4725]: I1014 13:16:18.095726 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:18 crc kubenswrapper[4725]: I1014 13:16:18.095780 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:18 crc kubenswrapper[4725]: I1014 13:16:18.095810 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:18 crc kubenswrapper[4725]: I1014 13:16:18.095828 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:18Z","lastTransitionTime":"2025-10-14T13:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:18 crc kubenswrapper[4725]: I1014 13:16:18.198360 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:18 crc kubenswrapper[4725]: I1014 13:16:18.198401 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:18 crc kubenswrapper[4725]: I1014 13:16:18.198410 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:18 crc kubenswrapper[4725]: I1014 13:16:18.198423 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:18 crc kubenswrapper[4725]: I1014 13:16:18.198433 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:18Z","lastTransitionTime":"2025-10-14T13:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:18 crc kubenswrapper[4725]: I1014 13:16:18.307855 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:18 crc kubenswrapper[4725]: I1014 13:16:18.308005 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:18 crc kubenswrapper[4725]: I1014 13:16:18.308033 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:18 crc kubenswrapper[4725]: I1014 13:16:18.308063 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:18 crc kubenswrapper[4725]: I1014 13:16:18.308086 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:18Z","lastTransitionTime":"2025-10-14T13:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:18 crc kubenswrapper[4725]: I1014 13:16:18.411245 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:18 crc kubenswrapper[4725]: I1014 13:16:18.411307 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:18 crc kubenswrapper[4725]: I1014 13:16:18.411327 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:18 crc kubenswrapper[4725]: I1014 13:16:18.411351 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:18 crc kubenswrapper[4725]: I1014 13:16:18.411369 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:18Z","lastTransitionTime":"2025-10-14T13:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:18 crc kubenswrapper[4725]: I1014 13:16:18.514332 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:18 crc kubenswrapper[4725]: I1014 13:16:18.514380 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:18 crc kubenswrapper[4725]: I1014 13:16:18.514391 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:18 crc kubenswrapper[4725]: I1014 13:16:18.514436 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:18 crc kubenswrapper[4725]: I1014 13:16:18.514463 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:18Z","lastTransitionTime":"2025-10-14T13:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:18 crc kubenswrapper[4725]: I1014 13:16:18.616381 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:18 crc kubenswrapper[4725]: I1014 13:16:18.616431 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:18 crc kubenswrapper[4725]: I1014 13:16:18.616444 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:18 crc kubenswrapper[4725]: I1014 13:16:18.616510 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:18 crc kubenswrapper[4725]: I1014 13:16:18.616523 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:18Z","lastTransitionTime":"2025-10-14T13:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:18 crc kubenswrapper[4725]: I1014 13:16:18.719995 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:18 crc kubenswrapper[4725]: I1014 13:16:18.720035 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:18 crc kubenswrapper[4725]: I1014 13:16:18.720044 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:18 crc kubenswrapper[4725]: I1014 13:16:18.720059 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:18 crc kubenswrapper[4725]: I1014 13:16:18.720068 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:18Z","lastTransitionTime":"2025-10-14T13:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:18 crc kubenswrapper[4725]: I1014 13:16:18.822585 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:18 crc kubenswrapper[4725]: I1014 13:16:18.822631 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:18 crc kubenswrapper[4725]: I1014 13:16:18.822640 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:18 crc kubenswrapper[4725]: I1014 13:16:18.822654 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:18 crc kubenswrapper[4725]: I1014 13:16:18.822665 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:18Z","lastTransitionTime":"2025-10-14T13:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:18 crc kubenswrapper[4725]: I1014 13:16:18.920368 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:16:18 crc kubenswrapper[4725]: E1014 13:16:18.920603 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b" Oct 14 13:16:18 crc kubenswrapper[4725]: I1014 13:16:18.925198 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:18 crc kubenswrapper[4725]: I1014 13:16:18.925290 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:18 crc kubenswrapper[4725]: I1014 13:16:18.925312 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:18 crc kubenswrapper[4725]: I1014 13:16:18.925341 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:18 crc kubenswrapper[4725]: I1014 13:16:18.925359 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:18Z","lastTransitionTime":"2025-10-14T13:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.028348 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.028397 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.028408 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.028426 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.028438 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:19Z","lastTransitionTime":"2025-10-14T13:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.131788 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.131846 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.131863 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.131887 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.131901 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:19Z","lastTransitionTime":"2025-10-14T13:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.234635 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.234698 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.234721 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.234753 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.234771 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:19Z","lastTransitionTime":"2025-10-14T13:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.337911 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.337960 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.337972 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.337991 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.338004 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:19Z","lastTransitionTime":"2025-10-14T13:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.441559 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.441612 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.441624 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.441640 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.441651 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:19Z","lastTransitionTime":"2025-10-14T13:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.544323 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.544385 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.544405 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.544429 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.544446 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:19Z","lastTransitionTime":"2025-10-14T13:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.647430 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.647544 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.647569 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.647601 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.647627 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:19Z","lastTransitionTime":"2025-10-14T13:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.749538 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.749626 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.749656 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.749705 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.749732 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:19Z","lastTransitionTime":"2025-10-14T13:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.852906 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.852970 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.852992 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.853020 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.853050 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:19Z","lastTransitionTime":"2025-10-14T13:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.921082 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:16:19 crc kubenswrapper[4725]: E1014 13:16:19.921280 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.921342 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.921379 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:16:19 crc kubenswrapper[4725]: E1014 13:16:19.921545 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 13:16:19 crc kubenswrapper[4725]: E1014 13:16:19.921591 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.955178 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.955217 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.955227 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.955240 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:19 crc kubenswrapper[4725]: I1014 13:16:19.955251 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:19Z","lastTransitionTime":"2025-10-14T13:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.058187 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.058217 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.058228 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.058253 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.058265 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:20Z","lastTransitionTime":"2025-10-14T13:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.160282 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.160324 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.160336 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.160355 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.160368 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:20Z","lastTransitionTime":"2025-10-14T13:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.263759 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.263798 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.263809 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.263825 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.263837 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:20Z","lastTransitionTime":"2025-10-14T13:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.374802 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.374833 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.374845 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.374860 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.374872 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:20Z","lastTransitionTime":"2025-10-14T13:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.477563 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.477617 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.477631 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.477647 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.477657 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:20Z","lastTransitionTime":"2025-10-14T13:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.580302 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.580343 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.580355 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.580372 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.580384 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:20Z","lastTransitionTime":"2025-10-14T13:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.683106 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.683155 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.683168 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.683185 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.683199 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:20Z","lastTransitionTime":"2025-10-14T13:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.785492 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.785561 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.785580 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.785604 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.785622 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:20Z","lastTransitionTime":"2025-10-14T13:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.888786 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.888848 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.888866 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.888895 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.888915 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:20Z","lastTransitionTime":"2025-10-14T13:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.920040 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:16:20 crc kubenswrapper[4725]: E1014 13:16:20.920171 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b" Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.921239 4725 scope.go:117] "RemoveContainer" containerID="c17bed2977625610ad16227eb2cf9d89f9964a0b6a3305056c46d4230602f4f9" Oct 14 13:16:20 crc kubenswrapper[4725]: E1014 13:16:20.921712 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-9v9qj_openshift-ovn-kubernetes(38d54d71-93d1-4cde-940e-a371117f59bd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.990975 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.991052 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.991073 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.991101 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:20 crc kubenswrapper[4725]: I1014 13:16:20.991119 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:20Z","lastTransitionTime":"2025-10-14T13:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.093395 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.093480 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.093539 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.093563 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.093580 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:21Z","lastTransitionTime":"2025-10-14T13:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.196510 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.196564 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.196575 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.196593 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.196608 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:21Z","lastTransitionTime":"2025-10-14T13:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.298936 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.298987 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.298997 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.299013 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.299024 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:21Z","lastTransitionTime":"2025-10-14T13:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.403649 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.403689 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.403700 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.403716 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.403726 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:21Z","lastTransitionTime":"2025-10-14T13:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.506335 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.506391 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.506404 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.506423 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.506436 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:21Z","lastTransitionTime":"2025-10-14T13:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.609524 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.609587 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.609605 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.609625 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.609638 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:21Z","lastTransitionTime":"2025-10-14T13:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.713169 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.713222 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.713230 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.713257 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.713270 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:21Z","lastTransitionTime":"2025-10-14T13:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.815781 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.815836 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.815848 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.815864 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.815877 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:21Z","lastTransitionTime":"2025-10-14T13:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.918477 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.918736 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.918805 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.918948 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.919098 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:21Z","lastTransitionTime":"2025-10-14T13:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.920885 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:16:21 crc kubenswrapper[4725]: E1014 13:16:21.921017 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.921037 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:16:21 crc kubenswrapper[4725]: I1014 13:16:21.920901 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:16:21 crc kubenswrapper[4725]: E1014 13:16:21.921187 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 13:16:21 crc kubenswrapper[4725]: E1014 13:16:21.921296 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.022028 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.022067 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.022090 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.022103 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.022112 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:22Z","lastTransitionTime":"2025-10-14T13:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.126114 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.126167 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.126176 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.126192 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.126202 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:22Z","lastTransitionTime":"2025-10-14T13:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.230190 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.230241 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.230256 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.230277 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.230292 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:22Z","lastTransitionTime":"2025-10-14T13:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.332641 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.332699 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.332712 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.332736 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.332750 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:22Z","lastTransitionTime":"2025-10-14T13:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.437004 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.437056 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.437071 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.437092 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.437107 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:22Z","lastTransitionTime":"2025-10-14T13:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.540051 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.540114 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.540125 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.540145 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.540158 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:22Z","lastTransitionTime":"2025-10-14T13:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.643503 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.643559 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.643570 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.643591 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.643604 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:22Z","lastTransitionTime":"2025-10-14T13:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.747188 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.747275 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.747288 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.747309 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.747321 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:22Z","lastTransitionTime":"2025-10-14T13:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.850376 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.850431 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.850442 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.850502 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.850517 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:22Z","lastTransitionTime":"2025-10-14T13:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.920805 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:16:22 crc kubenswrapper[4725]: E1014 13:16:22.920979 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b" Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.953351 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.953409 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.953425 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.953444 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:22 crc kubenswrapper[4725]: I1014 13:16:22.953484 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:22Z","lastTransitionTime":"2025-10-14T13:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.056986 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.057047 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.057062 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.057085 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.057100 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:23Z","lastTransitionTime":"2025-10-14T13:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.160489 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.160560 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.160577 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.160603 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.160624 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:23Z","lastTransitionTime":"2025-10-14T13:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.263369 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.263447 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.263501 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.263532 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.263552 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:23Z","lastTransitionTime":"2025-10-14T13:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.366718 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.366784 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.366807 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.366833 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.366856 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:23Z","lastTransitionTime":"2025-10-14T13:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.470123 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.470196 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.470213 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.470242 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.470260 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:23Z","lastTransitionTime":"2025-10-14T13:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.574737 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.574794 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.574811 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.574836 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.574856 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:23Z","lastTransitionTime":"2025-10-14T13:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.588801 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.588852 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.588866 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.588886 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.588904 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T13:16:23Z","lastTransitionTime":"2025-10-14T13:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
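Each ~100ms block above repeats the same five records: four node-condition events plus one Ready=False status write, all driven by a single failing check, namely that the kubelet finds no CNI configuration under /etc/kubernetes/cni/net.d/. A minimal Go sketch of such a readiness probe follows; the accepted file extensions are an assumption for illustration, not the kubelet's actual implementation.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// networkReady mimics the readiness rule behind the records above:
// at least one CNI config file must exist in the conf dir.
func networkReady(confDir string) error {
	entries, err := os.ReadDir(confDir)
	if err != nil {
		return fmt.Errorf("reading %s: %w", confDir, err)
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return nil // a network provider has written its config
		}
	}
	return fmt.Errorf("no CNI configuration file in %s. Has your network provider started?", confDir)
}

func main() {
	if err := networkReady("/etc/kubernetes/cni/net.d"); err != nil {
		fmt.Println("NetworkReady=false:", err) // feeds the Ready=False condition above
	}
}

The check clears on its own once a provider (here, OVN-Kubernetes via multus) writes its config; no kubelet restart is needed, which is why the log keeps polling rather than giving up.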
Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.639221 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-vmplt"]
Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.640537 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vmplt"
Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.642781 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.643661 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.644554 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.644724 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.699976 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-8fjcf" podStartSLOduration=78.699957178 podStartE2EDuration="1m18.699957178s" podCreationTimestamp="2025-10-14 13:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:16:23.699620819 +0000 UTC m=+100.548055638" watchObservedRunningTime="2025-10-14 13:16:23.699957178 +0000 UTC m=+100.548391987"
Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.704892 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c421e24c-1e40-498a-9ba1-9f3078351fdd-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vmplt\" (UID: \"c421e24c-1e40-498a-9ba1-9f3078351fdd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vmplt"
Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.705009 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c421e24c-1e40-498a-9ba1-9f3078351fdd-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vmplt\" (UID: \"c421e24c-1e40-498a-9ba1-9f3078351fdd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vmplt"
Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.705063 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c421e24c-1e40-498a-9ba1-9f3078351fdd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-vmplt\" (UID: \"c421e24c-1e40-498a-9ba1-9f3078351fdd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vmplt"
Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.705107 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c421e24c-1e40-498a-9ba1-9f3078351fdd-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vmplt\" (UID: \"c421e24c-1e40-498a-9ba1-9f3078351fdd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vmplt"
Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.705202 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c421e24c-1e40-498a-9ba1-9f3078351fdd-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-vmplt\" (UID: \"c421e24c-1e40-498a-9ba1-9f3078351fdd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vmplt"
Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.705329 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b-metrics-certs\") pod \"network-metrics-daemon-cxcmw\" (UID: \"c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b\") " pod="openshift-multus/network-metrics-daemon-cxcmw"
Oct 14 13:16:23 crc kubenswrapper[4725]: E1014 13:16:23.706320 4725 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Oct 14 13:16:23 crc kubenswrapper[4725]: E1014 13:16:23.706511 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b-metrics-certs podName:c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:27.706475196 +0000 UTC m=+164.554910055 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b-metrics-certs") pod "network-metrics-daemon-cxcmw" (UID: "c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b") : object "openshift-multus"/"metrics-daemon-secret" not registered
Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.734378 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jbldr" podStartSLOduration=77.734353228 podStartE2EDuration="1m17.734353228s" podCreationTimestamp="2025-10-14 13:15:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:16:23.718792462 +0000 UTC m=+100.567227291" watchObservedRunningTime="2025-10-14 13:16:23.734353228 +0000 UTC m=+100.582788047"
Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.765326 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-n9mfx" podStartSLOduration=78.765305333 podStartE2EDuration="1m18.765305333s" podCreationTimestamp="2025-10-14 13:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:16:23.747898588 +0000 UTC m=+100.596333447" watchObservedRunningTime="2025-10-14 13:16:23.765305333 +0000 UTC m=+100.613740152"
Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.779394 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podStartSLOduration=78.779366537 podStartE2EDuration="1m18.779366537s" podCreationTimestamp="2025-10-14 13:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:16:23.765304043 +0000 UTC m=+100.613738852" watchObservedRunningTime="2025-10-14 13:16:23.779366537 +0000 UTC m=+100.627801346"
Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.806200 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c421e24c-1e40-498a-9ba1-9f3078351fdd-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vmplt\" (UID: \"c421e24c-1e40-498a-9ba1-9f3078351fdd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vmplt"
Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.806283 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c421e24c-1e40-498a-9ba1-9f3078351fdd-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vmplt\" (UID: \"c421e24c-1e40-498a-9ba1-9f3078351fdd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vmplt"
Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.806317 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c421e24c-1e40-498a-9ba1-9f3078351fdd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-vmplt\" (UID: \"c421e24c-1e40-498a-9ba1-9f3078351fdd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vmplt"
Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.806373 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c421e24c-1e40-498a-9ba1-9f3078351fdd-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vmplt\" (UID: \"c421e24c-1e40-498a-9ba1-9f3078351fdd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vmplt"
Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.806375 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c421e24c-1e40-498a-9ba1-9f3078351fdd-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vmplt\" (UID: \"c421e24c-1e40-498a-9ba1-9f3078351fdd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vmplt"
Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.806402 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c421e24c-1e40-498a-9ba1-9f3078351fdd-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-vmplt\" (UID: \"c421e24c-1e40-498a-9ba1-9f3078351fdd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vmplt"
Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.806508 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c421e24c-1e40-498a-9ba1-9f3078351fdd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-vmplt\" (UID: \"c421e24c-1e40-498a-9ba1-9f3078351fdd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vmplt"
Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.807309 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c421e24c-1e40-498a-9ba1-9f3078351fdd-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vmplt\" (UID: \"c421e24c-1e40-498a-9ba1-9f3078351fdd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vmplt"
Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.825150 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c421e24c-1e40-498a-9ba1-9f3078351fdd-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vmplt\" (UID: \"c421e24c-1e40-498a-9ba1-9f3078351fdd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vmplt"
Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.828799 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c421e24c-1e40-498a-9ba1-9f3078351fdd-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-vmplt\" (UID: \"c421e24c-1e40-498a-9ba1-9f3078351fdd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vmplt"
Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.840170 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=77.840153766 podStartE2EDuration="1m17.840153766s" podCreationTimestamp="2025-10-14 13:15:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:16:23.838654005 +0000 UTC m=+100.687088814" watchObservedRunningTime="2025-10-14 13:16:23.840153766 +0000 UTC m=+100.688588575"
Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.840359 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=43.840353732 podStartE2EDuration="43.840353732s" podCreationTimestamp="2025-10-14 13:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:16:23.794095399 +0000 UTC m=+100.642530218" watchObservedRunningTime="2025-10-14 13:16:23.840353732 +0000 UTC m=+100.688788551"
Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.890641 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-kbgwl" podStartSLOduration=78.890621724 podStartE2EDuration="1m18.890621724s" podCreationTimestamp="2025-10-14 13:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:16:23.890437279 +0000 UTC m=+100.738872098" watchObservedRunningTime="2025-10-14 13:16:23.890621724 +0000 UTC m=+100.739056533"
Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.916195 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=72.916169002 podStartE2EDuration="1m12.916169002s" podCreationTimestamp="2025-10-14 13:15:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:16:23.915883664 +0000 UTC m=+100.764318473" watchObservedRunningTime="2025-10-14 13:16:23.916169002 +0000 UTC m=+100.764603811"
Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.921190 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.921203 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.921229 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 14 13:16:23 crc kubenswrapper[4725]: E1014 13:16:23.922781 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 14 13:16:23 crc kubenswrapper[4725]: E1014 13:16:23.923185 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 14 13:16:23 crc kubenswrapper[4725]: E1014 13:16:23.923394 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.930613 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=18.930592906 podStartE2EDuration="18.930592906s" podCreationTimestamp="2025-10-14 13:16:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:16:23.93001665 +0000 UTC m=+100.778451459" watchObservedRunningTime="2025-10-14 13:16:23.930592906 +0000 UTC m=+100.779027715"
Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.957358 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-l7nwj" podStartSLOduration=78.957340616 podStartE2EDuration="1m18.957340616s" podCreationTimestamp="2025-10-14 13:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:16:23.95675439 +0000 UTC m=+100.805189219" watchObservedRunningTime="2025-10-14 13:16:23.957340616 +0000 UTC m=+100.805775425"
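Each "Observed pod startup duration" record above carries both podStartSLOduration and podStartE2EDuration; with the two pull timestamps at their zero value (no image pull observed, as on this preloaded CRC node), the SLO figure reduces to watchObservedRunningTime minus podCreationTimestamp. A sketch of that arithmetic using the node-ca-8fjcf values; the assumption that a non-zero pull window would be subtracted is illustrative, not taken from kubelet source.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the node-ca-8fjcf record above.
	created, _ := time.Parse(time.RFC3339, "2025-10-14T13:15:05Z")
	running, _ := time.Parse(time.RFC3339Nano, "2025-10-14T13:16:23.699957178Z")
	var firstPull, lastPull time.Time // zero: no image pull was observed

	slo := running.Sub(created)
	if !firstPull.IsZero() && !lastPull.IsZero() {
		slo -= lastPull.Sub(firstPull) // assumed: time spent pulling would be excluded
	}
	fmt.Println(slo) // 1m18.699957178s, matching podStartE2EDuration above
}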
Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.964753 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vmplt"
Oct 14 13:16:23 crc kubenswrapper[4725]: W1014 13:16:23.978729 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc421e24c_1e40_498a_9ba1_9f3078351fdd.slice/crio-b23d5e42ee03e7024ce98d9b122a7b09301619b7815d49612ff18f2b90acfc21 WatchSource:0}: Error finding container b23d5e42ee03e7024ce98d9b122a7b09301619b7815d49612ff18f2b90acfc21: Status 404 returned error can't find the container with id b23d5e42ee03e7024ce98d9b122a7b09301619b7815d49612ff18f2b90acfc21
Oct 14 13:16:23 crc kubenswrapper[4725]: I1014 13:16:23.986082 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"]
Oct 14 13:16:24 crc kubenswrapper[4725]: I1014 13:16:24.017927 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=1.01790958 podStartE2EDuration="1.01790958s" podCreationTimestamp="2025-10-14 13:16:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:16:24.017314894 +0000 UTC m=+100.865749723" watchObservedRunningTime="2025-10-14 13:16:24.01790958 +0000 UTC m=+100.866344389"
Oct 14 13:16:24 crc kubenswrapper[4725]: I1014 13:16:24.501669 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vmplt" event={"ID":"c421e24c-1e40-498a-9ba1-9f3078351fdd","Type":"ContainerStarted","Data":"b5ee3a9e2c593e6490776a830eae29b2b381b6312d606d0d5ec466ad97e50d3d"}
Oct 14 13:16:24 crc kubenswrapper[4725]: I1014 13:16:24.502048 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vmplt" event={"ID":"c421e24c-1e40-498a-9ba1-9f3078351fdd","Type":"ContainerStarted","Data":"b23d5e42ee03e7024ce98d9b122a7b09301619b7815d49612ff18f2b90acfc21"}
Oct 14 13:16:24 crc kubenswrapper[4725]: I1014 13:16:24.920962 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw"
Oct 14 13:16:24 crc kubenswrapper[4725]: E1014 13:16:24.921137 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b"
Oct 14 13:16:25 crc kubenswrapper[4725]: I1014 13:16:25.920865 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 14 13:16:25 crc kubenswrapper[4725]: I1014 13:16:25.920902 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 14 13:16:25 crc kubenswrapper[4725]: I1014 13:16:25.920865 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 14 13:16:25 crc kubenswrapper[4725]: E1014 13:16:25.921280 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 14 13:16:25 crc kubenswrapper[4725]: E1014 13:16:25.921353 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 14 13:16:25 crc kubenswrapper[4725]: E1014 13:16:25.921415 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 14 13:16:26 crc kubenswrapper[4725]: I1014 13:16:26.920216 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw"
Oct 14 13:16:26 crc kubenswrapper[4725]: E1014 13:16:26.920372 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b"
Oct 14 13:16:27 crc kubenswrapper[4725]: I1014 13:16:27.920605 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 14 13:16:27 crc kubenswrapper[4725]: I1014 13:16:27.920630 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 14 13:16:27 crc kubenswrapper[4725]: E1014 13:16:27.920835 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 14 13:16:27 crc kubenswrapper[4725]: I1014 13:16:27.920647 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 14 13:16:27 crc kubenswrapper[4725]: E1014 13:16:27.920943 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 14 13:16:27 crc kubenswrapper[4725]: E1014 13:16:27.921082 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 14 13:16:28 crc kubenswrapper[4725]: I1014 13:16:28.920622 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw"
Oct 14 13:16:28 crc kubenswrapper[4725]: E1014 13:16:28.920775 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b"
Oct 14 13:16:29 crc kubenswrapper[4725]: I1014 13:16:29.920777 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 14 13:16:29 crc kubenswrapper[4725]: E1014 13:16:29.920952 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 14 13:16:29 crc kubenswrapper[4725]: I1014 13:16:29.920775 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 14 13:16:29 crc kubenswrapper[4725]: E1014 13:16:29.921399 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 14 13:16:29 crc kubenswrapper[4725]: I1014 13:16:29.921613 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 14 13:16:29 crc kubenswrapper[4725]: E1014 13:16:29.921922 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 14 13:16:30 crc kubenswrapper[4725]: I1014 13:16:30.921439 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw"
Oct 14 13:16:30 crc kubenswrapper[4725]: E1014 13:16:30.921923 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b"
Oct 14 13:16:31 crc kubenswrapper[4725]: I1014 13:16:31.921093 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 14 13:16:31 crc kubenswrapper[4725]: I1014 13:16:31.921356 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 14 13:16:31 crc kubenswrapper[4725]: I1014 13:16:31.921402 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 14 13:16:31 crc kubenswrapper[4725]: E1014 13:16:31.922154 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 14 13:16:31 crc kubenswrapper[4725]: E1014 13:16:31.922025 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 14 13:16:31 crc kubenswrapper[4725]: E1014 13:16:31.922555 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 14 13:16:32 crc kubenswrapper[4725]: I1014 13:16:32.920591 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw"
Oct 14 13:16:32 crc kubenswrapper[4725]: E1014 13:16:32.920725 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b"
Oct 14 13:16:33 crc kubenswrapper[4725]: I1014 13:16:33.920653 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 14 13:16:33 crc kubenswrapper[4725]: I1014 13:16:33.920912 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 14 13:16:33 crc kubenswrapper[4725]: I1014 13:16:33.921109 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 14 13:16:33 crc kubenswrapper[4725]: E1014 13:16:33.921918 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 14 13:16:33 crc kubenswrapper[4725]: E1014 13:16:33.922127 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 14 13:16:33 crc kubenswrapper[4725]: E1014 13:16:33.922318 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 14 13:16:34 crc kubenswrapper[4725]: I1014 13:16:34.920262 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw"
Oct 14 13:16:34 crc kubenswrapper[4725]: E1014 13:16:34.920473 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b"
Oct 14 13:16:35 crc kubenswrapper[4725]: I1014 13:16:35.920948 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 14 13:16:35 crc kubenswrapper[4725]: E1014 13:16:35.921083 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 14 13:16:35 crc kubenswrapper[4725]: I1014 13:16:35.921264 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 14 13:16:35 crc kubenswrapper[4725]: E1014 13:16:35.921320 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 14 13:16:35 crc kubenswrapper[4725]: I1014 13:16:35.921476 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 14 13:16:35 crc kubenswrapper[4725]: E1014 13:16:35.921525 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 14 13:16:35 crc kubenswrapper[4725]: I1014 13:16:35.922231 4725 scope.go:117] "RemoveContainer" containerID="c17bed2977625610ad16227eb2cf9d89f9964a0b6a3305056c46d4230602f4f9"
Oct 14 13:16:35 crc kubenswrapper[4725]: E1014 13:16:35.922366 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-9v9qj_openshift-ovn-kubernetes(38d54d71-93d1-4cde-940e-a371117f59bd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" podUID="38d54d71-93d1-4cde-940e-a371117f59bd"
Oct 14 13:16:36 crc kubenswrapper[4725]: I1014 13:16:36.920823 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw"
Oct 14 13:16:36 crc kubenswrapper[4725]: E1014 13:16:36.921013 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b"
Oct 14 13:16:37 crc kubenswrapper[4725]: I1014 13:16:37.920128 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 14 13:16:37 crc kubenswrapper[4725]: I1014 13:16:37.920251 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 14 13:16:37 crc kubenswrapper[4725]: E1014 13:16:37.920309 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 14 13:16:37 crc kubenswrapper[4725]: E1014 13:16:37.920430 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 14 13:16:37 crc kubenswrapper[4725]: I1014 13:16:37.920654 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 14 13:16:37 crc kubenswrapper[4725]: E1014 13:16:37.920818 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 14 13:16:38 crc kubenswrapper[4725]: I1014 13:16:38.920716 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw"
Oct 14 13:16:38 crc kubenswrapper[4725]: E1014 13:16:38.920962 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b"
Oct 14 13:16:39 crc kubenswrapper[4725]: I1014 13:16:39.922707 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 14 13:16:39 crc kubenswrapper[4725]: E1014 13:16:39.922827 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 14 13:16:39 crc kubenswrapper[4725]: I1014 13:16:39.922874 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 14 13:16:39 crc kubenswrapper[4725]: E1014 13:16:39.922914 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 14 13:16:39 crc kubenswrapper[4725]: I1014 13:16:39.922940 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 14 13:16:39 crc kubenswrapper[4725]: E1014 13:16:39.922976 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 14 13:16:40 crc kubenswrapper[4725]: I1014 13:16:40.920707 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw"
Oct 14 13:16:40 crc kubenswrapper[4725]: E1014 13:16:40.920963 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b"
Oct 14 13:16:41 crc kubenswrapper[4725]: I1014 13:16:41.568784 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kbgwl_d4ed727c-f4d1-47cd-a218-e22803eb1750/kube-multus/1.log"
Oct 14 13:16:41 crc kubenswrapper[4725]: I1014 13:16:41.569616 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kbgwl_d4ed727c-f4d1-47cd-a218-e22803eb1750/kube-multus/0.log"
Oct 14 13:16:41 crc kubenswrapper[4725]: I1014 13:16:41.569708 4725 generic.go:334] "Generic (PLEG): container finished" podID="d4ed727c-f4d1-47cd-a218-e22803eb1750" containerID="b42368f230c49822144cc9910a09a22f268f840c3540e0a68f7c16659e5061bb" exitCode=1
Oct 14 13:16:41 crc kubenswrapper[4725]: I1014 13:16:41.569760 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kbgwl" event={"ID":"d4ed727c-f4d1-47cd-a218-e22803eb1750","Type":"ContainerDied","Data":"b42368f230c49822144cc9910a09a22f268f840c3540e0a68f7c16659e5061bb"}
Oct 14 13:16:41 crc kubenswrapper[4725]: I1014 13:16:41.569808 4725 scope.go:117] "RemoveContainer" containerID="e839b3fcf6bd42a4f3c8167cfc3a8ce3d333f519ef8fa12daa8bfb7f795e6bb6"
Oct 14 13:16:41 crc kubenswrapper[4725]: I1014 13:16:41.571348 4725 scope.go:117] "RemoveContainer" containerID="b42368f230c49822144cc9910a09a22f268f840c3540e0a68f7c16659e5061bb"
Oct 14 13:16:41 crc kubenswrapper[4725]: E1014 13:16:41.571731 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-kbgwl_openshift-multus(d4ed727c-f4d1-47cd-a218-e22803eb1750)\"" pod="openshift-multus/multus-kbgwl" podUID="d4ed727c-f4d1-47cd-a218-e22803eb1750"
Oct 14 13:16:41 crc kubenswrapper[4725]: I1014 13:16:41.591183 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vmplt" podStartSLOduration=96.591161989 podStartE2EDuration="1m36.591161989s" podCreationTimestamp="2025-10-14 13:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:16:24.523168636 +0000 UTC m=+101.371603445" watchObservedRunningTime="2025-10-14 13:16:41.591161989 +0000 UTC m=+118.439596808"
Oct 14 13:16:41 crc kubenswrapper[4725]: I1014 13:16:41.921277 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 14 13:16:41 crc kubenswrapper[4725]: I1014 13:16:41.921407 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 13:16:41 crc kubenswrapper[4725]: E1014 13:16:41.921709 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 13:16:41 crc kubenswrapper[4725]: I1014 13:16:41.922059 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:16:41 crc kubenswrapper[4725]: E1014 13:16:41.922484 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 13:16:42 crc kubenswrapper[4725]: I1014 13:16:42.574886 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kbgwl_d4ed727c-f4d1-47cd-a218-e22803eb1750/kube-multus/1.log" Oct 14 13:16:42 crc kubenswrapper[4725]: I1014 13:16:42.921434 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:16:42 crc kubenswrapper[4725]: E1014 13:16:42.922180 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b" Oct 14 13:16:43 crc kubenswrapper[4725]: I1014 13:16:43.920989 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:16:43 crc kubenswrapper[4725]: E1014 13:16:43.921017 4725 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Oct 14 13:16:43 crc kubenswrapper[4725]: I1014 13:16:43.921055 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:16:43 crc kubenswrapper[4725]: I1014 13:16:43.921154 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:16:43 crc kubenswrapper[4725]: E1014 13:16:43.922652 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 13:16:43 crc kubenswrapper[4725]: E1014 13:16:43.922791 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 13:16:43 crc kubenswrapper[4725]: E1014 13:16:43.923034 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 13:16:44 crc kubenswrapper[4725]: E1014 13:16:44.021917 4725 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 14 13:16:44 crc kubenswrapper[4725]: I1014 13:16:44.920296 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:16:44 crc kubenswrapper[4725]: E1014 13:16:44.920597 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b" Oct 14 13:16:45 crc kubenswrapper[4725]: I1014 13:16:45.920682 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:16:45 crc kubenswrapper[4725]: I1014 13:16:45.920863 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:16:45 crc kubenswrapper[4725]: E1014 13:16:45.921039 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 13:16:45 crc kubenswrapper[4725]: E1014 13:16:45.920861 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 13:16:45 crc kubenswrapper[4725]: I1014 13:16:45.920769 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:16:45 crc kubenswrapper[4725]: E1014 13:16:45.921185 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 13:16:46 crc kubenswrapper[4725]: I1014 13:16:46.921042 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:16:46 crc kubenswrapper[4725]: E1014 13:16:46.921215 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b" Oct 14 13:16:47 crc kubenswrapper[4725]: I1014 13:16:47.920215 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:16:47 crc kubenswrapper[4725]: I1014 13:16:47.920318 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:16:47 crc kubenswrapper[4725]: E1014 13:16:47.920404 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 13:16:47 crc kubenswrapper[4725]: I1014 13:16:47.920350 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:16:47 crc kubenswrapper[4725]: E1014 13:16:47.920545 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 13:16:47 crc kubenswrapper[4725]: E1014 13:16:47.920719 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 13:16:48 crc kubenswrapper[4725]: I1014 13:16:48.920796 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:16:48 crc kubenswrapper[4725]: E1014 13:16:48.921007 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b" Oct 14 13:16:48 crc kubenswrapper[4725]: I1014 13:16:48.921809 4725 scope.go:117] "RemoveContainer" containerID="c17bed2977625610ad16227eb2cf9d89f9964a0b6a3305056c46d4230602f4f9" Oct 14 13:16:49 crc kubenswrapper[4725]: E1014 13:16:49.023211 4725 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 14 13:16:49 crc kubenswrapper[4725]: I1014 13:16:49.603845 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9v9qj_38d54d71-93d1-4cde-940e-a371117f59bd/ovnkube-controller/3.log" Oct 14 13:16:49 crc kubenswrapper[4725]: I1014 13:16:49.606287 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" event={"ID":"38d54d71-93d1-4cde-940e-a371117f59bd","Type":"ContainerStarted","Data":"c52eeb4bfd39980a5c20ddd0389da2011ca60292fa4c01cfea1d5c9d45d297b4"} Oct 14 13:16:49 crc kubenswrapper[4725]: I1014 13:16:49.606863 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:16:49 crc kubenswrapper[4725]: I1014 13:16:49.634004 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" podStartSLOduration=104.633927851 podStartE2EDuration="1m44.633927851s" podCreationTimestamp="2025-10-14 13:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:16:49.633296164 +0000 UTC m=+126.481730973" watchObservedRunningTime="2025-10-14 13:16:49.633927851 +0000 UTC m=+126.482362690" Oct 14 13:16:49 crc kubenswrapper[4725]: I1014 13:16:49.871576 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-cxcmw"] Oct 14 13:16:49 crc kubenswrapper[4725]: I1014 13:16:49.871737 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:16:49 crc kubenswrapper[4725]: E1014 13:16:49.871850 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b" Oct 14 13:16:49 crc kubenswrapper[4725]: I1014 13:16:49.920178 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:16:49 crc kubenswrapper[4725]: I1014 13:16:49.920259 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:16:49 crc kubenswrapper[4725]: E1014 13:16:49.920334 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 13:16:49 crc kubenswrapper[4725]: I1014 13:16:49.920394 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:16:49 crc kubenswrapper[4725]: E1014 13:16:49.920505 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 13:16:49 crc kubenswrapper[4725]: E1014 13:16:49.920609 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 13:16:50 crc kubenswrapper[4725]: I1014 13:16:50.920904 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:16:50 crc kubenswrapper[4725]: E1014 13:16:50.921764 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b" Oct 14 13:16:51 crc kubenswrapper[4725]: I1014 13:16:51.920793 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:16:51 crc kubenswrapper[4725]: I1014 13:16:51.920805 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:16:51 crc kubenswrapper[4725]: I1014 13:16:51.920927 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:16:51 crc kubenswrapper[4725]: E1014 13:16:51.921083 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 13:16:51 crc kubenswrapper[4725]: E1014 13:16:51.921129 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 13:16:51 crc kubenswrapper[4725]: E1014 13:16:51.921004 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 13:16:52 crc kubenswrapper[4725]: I1014 13:16:52.920308 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:16:52 crc kubenswrapper[4725]: E1014 13:16:52.920494 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b" Oct 14 13:16:53 crc kubenswrapper[4725]: I1014 13:16:53.921390 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:16:53 crc kubenswrapper[4725]: I1014 13:16:53.921585 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:16:53 crc kubenswrapper[4725]: I1014 13:16:53.922357 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:16:53 crc kubenswrapper[4725]: E1014 13:16:53.922350 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 13:16:53 crc kubenswrapper[4725]: E1014 13:16:53.922474 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 13:16:53 crc kubenswrapper[4725]: E1014 13:16:53.922633 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 13:16:54 crc kubenswrapper[4725]: E1014 13:16:54.023668 4725 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 14 13:16:54 crc kubenswrapper[4725]: I1014 13:16:54.920583 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:16:54 crc kubenswrapper[4725]: E1014 13:16:54.921502 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b" Oct 14 13:16:55 crc kubenswrapper[4725]: I1014 13:16:55.920284 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:16:55 crc kubenswrapper[4725]: I1014 13:16:55.920670 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:16:55 crc kubenswrapper[4725]: E1014 13:16:55.920912 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 13:16:55 crc kubenswrapper[4725]: I1014 13:16:55.921001 4725 scope.go:117] "RemoveContainer" containerID="b42368f230c49822144cc9910a09a22f268f840c3540e0a68f7c16659e5061bb" Oct 14 13:16:55 crc kubenswrapper[4725]: E1014 13:16:55.921067 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 13:16:55 crc kubenswrapper[4725]: I1014 13:16:55.921387 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:16:55 crc kubenswrapper[4725]: E1014 13:16:55.921510 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 13:16:56 crc kubenswrapper[4725]: I1014 13:16:56.630845 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kbgwl_d4ed727c-f4d1-47cd-a218-e22803eb1750/kube-multus/1.log" Oct 14 13:16:56 crc kubenswrapper[4725]: I1014 13:16:56.631304 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kbgwl" event={"ID":"d4ed727c-f4d1-47cd-a218-e22803eb1750","Type":"ContainerStarted","Data":"a99474c5c2939852e49d51916f4f54fb8a55b54572012502692bfefcee420f3e"} Oct 14 13:16:56 crc kubenswrapper[4725]: I1014 13:16:56.921151 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:16:56 crc kubenswrapper[4725]: E1014 13:16:56.921341 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b" Oct 14 13:16:57 crc kubenswrapper[4725]: I1014 13:16:57.921014 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:16:57 crc kubenswrapper[4725]: I1014 13:16:57.921051 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:16:57 crc kubenswrapper[4725]: I1014 13:16:57.921014 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:16:57 crc kubenswrapper[4725]: E1014 13:16:57.921155 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 14 13:16:57 crc kubenswrapper[4725]: E1014 13:16:57.921210 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 14 13:16:57 crc kubenswrapper[4725]: E1014 13:16:57.921261 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 14 13:16:58 crc kubenswrapper[4725]: I1014 13:16:58.920992 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:16:58 crc kubenswrapper[4725]: E1014 13:16:58.921566 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cxcmw" podUID="c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b" Oct 14 13:16:59 crc kubenswrapper[4725]: I1014 13:16:59.920466 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:16:59 crc kubenswrapper[4725]: I1014 13:16:59.920488 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:16:59 crc kubenswrapper[4725]: I1014 13:16:59.920722 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:16:59 crc kubenswrapper[4725]: I1014 13:16:59.922923 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Oct 14 13:16:59 crc kubenswrapper[4725]: I1014 13:16:59.923010 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Oct 14 13:16:59 crc kubenswrapper[4725]: I1014 13:16:59.923225 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Oct 14 13:16:59 crc kubenswrapper[4725]: I1014 13:16:59.926884 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Oct 14 13:17:00 crc kubenswrapper[4725]: I1014 13:17:00.920844 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:17:00 crc kubenswrapper[4725]: I1014 13:17:00.923557 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Oct 14 13:17:00 crc kubenswrapper[4725]: I1014 13:17:00.923636 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.651020 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.698980 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-96fdf"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.699518 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-96fdf" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.703887 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vq7zq"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.704298 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-r985j"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.704775 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-r985j" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.705348 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.710260 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.710601 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.710838 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.711266 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.711434 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.718725 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.726421 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.727587 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.727608 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.727634 4725 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"kube-rbac-proxy" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.727825 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.727939 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pfnhc"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.728593 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-pfnhc" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.729305 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.729393 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.729705 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.729712 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.729860 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.729866 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.731313 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.731722 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.732390 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-tddw7"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.733120 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tddw7" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.735248 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-dfhsw"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.735958 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-dfhsw" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.738948 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.739113 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.739372 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.739397 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.740967 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.742251 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.742403 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.742484 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.743824 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4pqps"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.744771 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-96hd4"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.744805 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4pqps" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.746429 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.748503 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-k8lml"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.749127 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.749258 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-k8lml" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.751527 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k4sc6"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.752037 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.752037 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.752251 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k4sc6" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.752330 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.755342 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.755690 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.755828 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.757100 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-zfnkd"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.758757 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.758850 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.759224 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.759530 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.759785 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zfnkd" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.759996 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.760292 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.760506 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.760734 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.760782 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.760782 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.760795 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.761667 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2bfkn"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.762517 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-2bfkn" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.778145 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.779778 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-462vw"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.781909 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.783344 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-462vw" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.797071 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.803915 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-s9xxp"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.804616 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-s9xxp" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.808269 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.808883 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.809014 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.809274 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.809430 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.809743 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.809839 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.809765 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.810198 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-8b79b"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.810373 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.810532 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.810957 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-dbshk"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.811313 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.811352 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.811351 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6brk2"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.811441 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-dbshk" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.811589 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.812372 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6brk2" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.811876 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-8b79b" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.811692 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.811747 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.813376 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.817682 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-658kg"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.818535 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-658kg" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.820085 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.820364 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.821137 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.821382 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.821392 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.821760 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.823359 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kq2mg"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.824199 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kq2mg" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.825414 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mspcl"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.832126 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-dmz29"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.832656 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-w98cd"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.829193 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.832878 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mspcl" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.833123 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dmz29" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.830825 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.830965 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.831127 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.831426 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.831490 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.831588 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.831618 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.834383 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w98cd" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.835823 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-twd2b"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.836118 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.836348 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-twd2b" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.836411 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.837310 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-4v6wb"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.838967 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-4v6wb" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.840277 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.840440 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.840601 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.840811 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.842826 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.843078 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.843580 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-htmcq"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.854961 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-htmcq" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.856626 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.856694 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.857000 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.857397 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.858123 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.858860 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.866763 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.875627 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.875867 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.878717 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.878973 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.879047 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.882087 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.882632 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9rnn7"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.883393 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.883436 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.883511 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9rnn7" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.883405 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-p8t6g"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.883992 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.884686 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-f9xkt"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.885301 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.885655 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f9xkt" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.885900 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-p8t6g" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.886335 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hb586"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.886641 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.886746 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hb586" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.887416 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-f7scx"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.888053 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.888082 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.889049 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f7scx" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.889315 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-s94lq"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.890592 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-s94lq" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.893608 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftgzg"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.895708 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f6klx"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.895839 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftgzg" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.896580 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b9mz9"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.896839 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f6klx" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.897551 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4g6pl"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.898103 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4g6pl" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.898188 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b9mz9" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.898860 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-96fdf"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.900592 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6dvkx"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.901293 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6dvkx" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.901816 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340795-6q4fq"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.901966 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.903165 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340795-6q4fq" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.904388 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-6pc7v"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.905182 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-6pc7v" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.906531 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-r985j"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.907655 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vq7zq"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.908779 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pfnhc"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.910729 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-dfhsw"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.911144 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4bb4a9f2-74c0-401e-b880-bd17f95b00d2-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-96fdf\" (UID: \"4bb4a9f2-74c0-401e-b880-bd17f95b00d2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-96fdf" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.911180 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vq7zq\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.911206 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vq7zq\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.911227 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f3ea0115-bde6-42c0-b55f-2bd6d9b68d35-machine-approver-tls\") pod \"machine-approver-56656f9798-tddw7\" (UID: \"f3ea0115-bde6-42c0-b55f-2bd6d9b68d35\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tddw7" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.911247 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4wsk\" (UniqueName: \"kubernetes.io/projected/f3ea0115-bde6-42c0-b55f-2bd6d9b68d35-kube-api-access-c4wsk\") pod \"machine-approver-56656f9798-tddw7\" (UID: \"f3ea0115-bde6-42c0-b55f-2bd6d9b68d35\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tddw7" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.911265 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4bb4a9f2-74c0-401e-b880-bd17f95b00d2-serving-cert\") pod \"controller-manager-879f6c89f-96fdf\" (UID: \"4bb4a9f2-74c0-401e-b880-bd17f95b00d2\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-96fdf" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.911371 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1abee1b1-8c1e-43df-89cc-5381a2ef0fc6-config\") pod \"machine-api-operator-5694c8668f-r985j\" (UID: \"1abee1b1-8c1e-43df-89cc-5381a2ef0fc6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r985j" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.911425 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vq7zq\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.911495 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1abee1b1-8c1e-43df-89cc-5381a2ef0fc6-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-r985j\" (UID: \"1abee1b1-8c1e-43df-89cc-5381a2ef0fc6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r985j" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.911525 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b3fd5e11-b817-4e76-a744-09eefc35c83b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-k8lml\" (UID: \"b3fd5e11-b817-4e76-a744-09eefc35c83b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-k8lml" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.911544 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66pz6\" (UniqueName: \"kubernetes.io/projected/4bb4a9f2-74c0-401e-b880-bd17f95b00d2-kube-api-access-66pz6\") pod \"controller-manager-879f6c89f-96fdf\" (UID: \"4bb4a9f2-74c0-401e-b880-bd17f95b00d2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-96fdf" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.911563 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vq7zq\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.911620 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77a019ee-f99c-4962-9032-6493f34adee4-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-k4sc6\" (UID: \"77a019ee-f99c-4962-9032-6493f34adee4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k4sc6" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.911655 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vq7zq\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.911675 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f3ea0115-bde6-42c0-b55f-2bd6d9b68d35-auth-proxy-config\") pod \"machine-approver-56656f9798-tddw7\" (UID: \"f3ea0115-bde6-42c0-b55f-2bd6d9b68d35\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tddw7" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.911694 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ded0ec42-2430-4e32-909c-308aeef7c49a-client-ca\") pod \"route-controller-manager-6576b87f9c-462vw\" (UID: \"ded0ec42-2430-4e32-909c-308aeef7c49a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-462vw" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.911715 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp5tx\" (UniqueName: \"kubernetes.io/projected/77a019ee-f99c-4962-9032-6493f34adee4-kube-api-access-kp5tx\") pod \"openshift-controller-manager-operator-756b6f6bc6-k4sc6\" (UID: \"77a019ee-f99c-4962-9032-6493f34adee4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k4sc6" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.911734 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69z7n\" (UniqueName: \"kubernetes.io/projected/960ecad3-135d-4478-bd6f-b37588dd49bb-kube-api-access-69z7n\") pod \"oauth-openshift-558db77b4-vq7zq\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.911771 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vq7zq\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.911804 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3ea0115-bde6-42c0-b55f-2bd6d9b68d35-config\") pod \"machine-approver-56656f9798-tddw7\" (UID: \"f3ea0115-bde6-42c0-b55f-2bd6d9b68d35\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tddw7" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.911887 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8ca672d-5997-4fe0-b717-64e0b07ffbbe-serving-cert\") pod \"console-operator-58897d9998-dfhsw\" (UID: \"b8ca672d-5997-4fe0-b717-64e0b07ffbbe\") " pod="openshift-console-operator/console-operator-58897d9998-dfhsw" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.911929 4725 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dt2x\" (UniqueName: \"kubernetes.io/projected/604c84ad-7483-438d-9972-a031afc477f6-kube-api-access-4dt2x\") pod \"cluster-samples-operator-665b6dd947-4pqps\" (UID: \"604c84ad-7483-438d-9972-a031afc477f6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4pqps" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.911961 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vq7zq\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.911982 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3fd5e11-b817-4e76-a744-09eefc35c83b-serving-cert\") pod \"openshift-config-operator-7777fb866f-k8lml\" (UID: \"b3fd5e11-b817-4e76-a744-09eefc35c83b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-k8lml" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.912044 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vq7zq\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.912076 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/604c84ad-7483-438d-9972-a031afc477f6-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-4pqps\" (UID: \"604c84ad-7483-438d-9972-a031afc477f6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4pqps" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.912092 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/960ecad3-135d-4478-bd6f-b37588dd49bb-audit-policies\") pod \"oauth-openshift-558db77b4-vq7zq\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.912107 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bb4a9f2-74c0-401e-b880-bd17f95b00d2-config\") pod \"controller-manager-879f6c89f-96fdf\" (UID: \"4bb4a9f2-74c0-401e-b880-bd17f95b00d2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-96fdf" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.912123 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ded0ec42-2430-4e32-909c-308aeef7c49a-config\") pod \"route-controller-manager-6576b87f9c-462vw\" (UID: \"ded0ec42-2430-4e32-909c-308aeef7c49a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-462vw" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 
13:17:04.912142 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lv4m\" (UniqueName: \"kubernetes.io/projected/ded0ec42-2430-4e32-909c-308aeef7c49a-kube-api-access-8lv4m\") pod \"route-controller-manager-6576b87f9c-462vw\" (UID: \"ded0ec42-2430-4e32-909c-308aeef7c49a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-462vw" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.912190 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc6hp\" (UniqueName: \"kubernetes.io/projected/b8ca672d-5997-4fe0-b717-64e0b07ffbbe-kube-api-access-hc6hp\") pod \"console-operator-58897d9998-dfhsw\" (UID: \"b8ca672d-5997-4fe0-b717-64e0b07ffbbe\") " pod="openshift-console-operator/console-operator-58897d9998-dfhsw" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.912219 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ded0ec42-2430-4e32-909c-308aeef7c49a-serving-cert\") pod \"route-controller-manager-6576b87f9c-462vw\" (UID: \"ded0ec42-2430-4e32-909c-308aeef7c49a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-462vw" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.912244 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vq7zq\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.912356 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4bb4a9f2-74c0-401e-b880-bd17f95b00d2-client-ca\") pod \"controller-manager-879f6c89f-96fdf\" (UID: \"4bb4a9f2-74c0-401e-b880-bd17f95b00d2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-96fdf" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.912439 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1abee1b1-8c1e-43df-89cc-5381a2ef0fc6-images\") pod \"machine-api-operator-5694c8668f-r985j\" (UID: \"1abee1b1-8c1e-43df-89cc-5381a2ef0fc6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r985j" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.912515 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vq7zq\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.912554 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hctgk\" (UniqueName: \"kubernetes.io/projected/1abee1b1-8c1e-43df-89cc-5381a2ef0fc6-kube-api-access-hctgk\") pod \"machine-api-operator-5694c8668f-r985j\" (UID: \"1abee1b1-8c1e-43df-89cc-5381a2ef0fc6\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-r985j" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.912581 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8ca672d-5997-4fe0-b717-64e0b07ffbbe-config\") pod \"console-operator-58897d9998-dfhsw\" (UID: \"b8ca672d-5997-4fe0-b717-64e0b07ffbbe\") " pod="openshift-console-operator/console-operator-58897d9998-dfhsw" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.912608 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/960ecad3-135d-4478-bd6f-b37588dd49bb-audit-dir\") pod \"oauth-openshift-558db77b4-vq7zq\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.912625 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77a019ee-f99c-4962-9032-6493f34adee4-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-k4sc6\" (UID: \"77a019ee-f99c-4962-9032-6493f34adee4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k4sc6" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.912642 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b8ca672d-5997-4fe0-b717-64e0b07ffbbe-trusted-ca\") pod \"console-operator-58897d9998-dfhsw\" (UID: \"b8ca672d-5997-4fe0-b717-64e0b07ffbbe\") " pod="openshift-console-operator/console-operator-58897d9998-dfhsw" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.912671 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vq7zq\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.912703 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26kbc\" (UniqueName: \"kubernetes.io/projected/b3fd5e11-b817-4e76-a744-09eefc35c83b-kube-api-access-26kbc\") pod \"openshift-config-operator-7777fb866f-k8lml\" (UID: \"b3fd5e11-b817-4e76-a744-09eefc35c83b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-k8lml" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.912745 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-k8lml"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.913651 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-zfnkd"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.914913 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-8b79b"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.915824 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4pqps"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.916675 4725 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-462vw"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.919841 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-dbshk"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.920836 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-zbjgv"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.924135 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.931799 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6brk2"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.931903 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9rnn7"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.931974 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-4v6wb"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.931991 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hb586"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.932043 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-658kg"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.932043 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-zbjgv" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.944164 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.947257 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftgzg"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.949746 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2bfkn"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.951274 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-dmz29"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.952087 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-f7scx"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.953436 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k4sc6"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.954490 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-96hd4"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.955981 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kq2mg"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.957172 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mspcl"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.958279 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-s94lq"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.959613 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-f9xkt"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.960833 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-w98cd"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.962120 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b9mz9"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.962345 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.964305 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-htmcq"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.965371 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-jnvzr"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.966787 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-zjwcj"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.967309 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-jnvzr" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.967361 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-zjwcj" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.968542 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-p8t6g"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.969669 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-6pc7v"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.970742 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-zbjgv"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.971785 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340795-6q4fq"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.972940 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-s9xxp"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.974003 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f6klx"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.975299 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-zjwcj"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.976425 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6dvkx"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.977521 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-jnvzr"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.978667 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4g6pl"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.980909 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-h2rt9"] Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.982440 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Oct 14 13:17:04 crc kubenswrapper[4725]: I1014 13:17:04.982696 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-h2rt9" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.002939 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.013327 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/960ecad3-135d-4478-bd6f-b37588dd49bb-audit-dir\") pod \"oauth-openshift-558db77b4-vq7zq\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.013571 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77a019ee-f99c-4962-9032-6493f34adee4-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-k4sc6\" (UID: \"77a019ee-f99c-4962-9032-6493f34adee4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k4sc6" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.013684 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b8ca672d-5997-4fe0-b717-64e0b07ffbbe-trusted-ca\") pod \"console-operator-58897d9998-dfhsw\" (UID: \"b8ca672d-5997-4fe0-b717-64e0b07ffbbe\") " pod="openshift-console-operator/console-operator-58897d9998-dfhsw" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.013778 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vq7zq\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.013945 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26kbc\" (UniqueName: \"kubernetes.io/projected/b3fd5e11-b817-4e76-a744-09eefc35c83b-kube-api-access-26kbc\") pod \"openshift-config-operator-7777fb866f-k8lml\" (UID: \"b3fd5e11-b817-4e76-a744-09eefc35c83b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-k8lml" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.014132 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4bb4a9f2-74c0-401e-b880-bd17f95b00d2-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-96fdf\" (UID: \"4bb4a9f2-74c0-401e-b880-bd17f95b00d2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-96fdf" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.015743 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vq7zq\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.015896 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1abee1b1-8c1e-43df-89cc-5381a2ef0fc6-config\") pod \"machine-api-operator-5694c8668f-r985j\" (UID: \"1abee1b1-8c1e-43df-89cc-5381a2ef0fc6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r985j" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.016054 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vq7zq\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.016160 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vq7zq\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.015690 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4bb4a9f2-74c0-401e-b880-bd17f95b00d2-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-96fdf\" (UID: \"4bb4a9f2-74c0-401e-b880-bd17f95b00d2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-96fdf" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.014698 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77a019ee-f99c-4962-9032-6493f34adee4-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-k4sc6\" (UID: \"77a019ee-f99c-4962-9032-6493f34adee4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k4sc6" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.014722 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vq7zq\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.016235 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f3ea0115-bde6-42c0-b55f-2bd6d9b68d35-machine-approver-tls\") pod \"machine-approver-56656f9798-tddw7\" (UID: \"f3ea0115-bde6-42c0-b55f-2bd6d9b68d35\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tddw7" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.016369 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4wsk\" (UniqueName: \"kubernetes.io/projected/f3ea0115-bde6-42c0-b55f-2bd6d9b68d35-kube-api-access-c4wsk\") pod \"machine-approver-56656f9798-tddw7\" (UID: \"f3ea0115-bde6-42c0-b55f-2bd6d9b68d35\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tddw7" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.016395 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4bb4a9f2-74c0-401e-b880-bd17f95b00d2-serving-cert\") pod 
\"controller-manager-879f6c89f-96fdf\" (UID: \"4bb4a9f2-74c0-401e-b880-bd17f95b00d2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-96fdf" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.016419 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1abee1b1-8c1e-43df-89cc-5381a2ef0fc6-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-r985j\" (UID: \"1abee1b1-8c1e-43df-89cc-5381a2ef0fc6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r985j" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.016442 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b3fd5e11-b817-4e76-a744-09eefc35c83b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-k8lml\" (UID: \"b3fd5e11-b817-4e76-a744-09eefc35c83b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-k8lml" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.016478 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66pz6\" (UniqueName: \"kubernetes.io/projected/4bb4a9f2-74c0-401e-b880-bd17f95b00d2-kube-api-access-66pz6\") pod \"controller-manager-879f6c89f-96fdf\" (UID: \"4bb4a9f2-74c0-401e-b880-bd17f95b00d2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-96fdf" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.016509 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vq7zq\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.016538 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vq7zq\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.016564 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77a019ee-f99c-4962-9032-6493f34adee4-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-k4sc6\" (UID: \"77a019ee-f99c-4962-9032-6493f34adee4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k4sc6" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.016598 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f3ea0115-bde6-42c0-b55f-2bd6d9b68d35-auth-proxy-config\") pod \"machine-approver-56656f9798-tddw7\" (UID: \"f3ea0115-bde6-42c0-b55f-2bd6d9b68d35\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tddw7" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.016629 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69z7n\" (UniqueName: \"kubernetes.io/projected/960ecad3-135d-4478-bd6f-b37588dd49bb-kube-api-access-69z7n\") pod 
\"oauth-openshift-558db77b4-vq7zq\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.016652 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ded0ec42-2430-4e32-909c-308aeef7c49a-client-ca\") pod \"route-controller-manager-6576b87f9c-462vw\" (UID: \"ded0ec42-2430-4e32-909c-308aeef7c49a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-462vw" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.016677 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kp5tx\" (UniqueName: \"kubernetes.io/projected/77a019ee-f99c-4962-9032-6493f34adee4-kube-api-access-kp5tx\") pod \"openshift-controller-manager-operator-756b6f6bc6-k4sc6\" (UID: \"77a019ee-f99c-4962-9032-6493f34adee4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k4sc6" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.016699 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vq7zq\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.013427 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/960ecad3-135d-4478-bd6f-b37588dd49bb-audit-dir\") pod \"oauth-openshift-558db77b4-vq7zq\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.016737 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3ea0115-bde6-42c0-b55f-2bd6d9b68d35-config\") pod \"machine-approver-56656f9798-tddw7\" (UID: \"f3ea0115-bde6-42c0-b55f-2bd6d9b68d35\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tddw7" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.016881 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8ca672d-5997-4fe0-b717-64e0b07ffbbe-serving-cert\") pod \"console-operator-58897d9998-dfhsw\" (UID: \"b8ca672d-5997-4fe0-b717-64e0b07ffbbe\") " pod="openshift-console-operator/console-operator-58897d9998-dfhsw" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.016931 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b8ca672d-5997-4fe0-b717-64e0b07ffbbe-trusted-ca\") pod \"console-operator-58897d9998-dfhsw\" (UID: \"b8ca672d-5997-4fe0-b717-64e0b07ffbbe\") " pod="openshift-console-operator/console-operator-58897d9998-dfhsw" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.016963 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dt2x\" (UniqueName: \"kubernetes.io/projected/604c84ad-7483-438d-9972-a031afc477f6-kube-api-access-4dt2x\") pod \"cluster-samples-operator-665b6dd947-4pqps\" (UID: \"604c84ad-7483-438d-9972-a031afc477f6\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4pqps" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.017004 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vq7zq\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.017031 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3fd5e11-b817-4e76-a744-09eefc35c83b-serving-cert\") pod \"openshift-config-operator-7777fb866f-k8lml\" (UID: \"b3fd5e11-b817-4e76-a744-09eefc35c83b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-k8lml" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.017113 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vq7zq\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.017148 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/960ecad3-135d-4478-bd6f-b37588dd49bb-audit-policies\") pod \"oauth-openshift-558db77b4-vq7zq\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.017179 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bb4a9f2-74c0-401e-b880-bd17f95b00d2-config\") pod \"controller-manager-879f6c89f-96fdf\" (UID: \"4bb4a9f2-74c0-401e-b880-bd17f95b00d2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-96fdf" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.017208 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/604c84ad-7483-438d-9972-a031afc477f6-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-4pqps\" (UID: \"604c84ad-7483-438d-9972-a031afc477f6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4pqps" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.017239 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ded0ec42-2430-4e32-909c-308aeef7c49a-config\") pod \"route-controller-manager-6576b87f9c-462vw\" (UID: \"ded0ec42-2430-4e32-909c-308aeef7c49a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-462vw" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.017243 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3ea0115-bde6-42c0-b55f-2bd6d9b68d35-config\") pod \"machine-approver-56656f9798-tddw7\" (UID: \"f3ea0115-bde6-42c0-b55f-2bd6d9b68d35\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tddw7" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.017265 4725 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lv4m\" (UniqueName: \"kubernetes.io/projected/ded0ec42-2430-4e32-909c-308aeef7c49a-kube-api-access-8lv4m\") pod \"route-controller-manager-6576b87f9c-462vw\" (UID: \"ded0ec42-2430-4e32-909c-308aeef7c49a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-462vw" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.017299 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hc6hp\" (UniqueName: \"kubernetes.io/projected/b8ca672d-5997-4fe0-b717-64e0b07ffbbe-kube-api-access-hc6hp\") pod \"console-operator-58897d9998-dfhsw\" (UID: \"b8ca672d-5997-4fe0-b717-64e0b07ffbbe\") " pod="openshift-console-operator/console-operator-58897d9998-dfhsw" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.017331 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ded0ec42-2430-4e32-909c-308aeef7c49a-serving-cert\") pod \"route-controller-manager-6576b87f9c-462vw\" (UID: \"ded0ec42-2430-4e32-909c-308aeef7c49a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-462vw" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.017371 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vq7zq\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.017406 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1abee1b1-8c1e-43df-89cc-5381a2ef0fc6-images\") pod \"machine-api-operator-5694c8668f-r985j\" (UID: \"1abee1b1-8c1e-43df-89cc-5381a2ef0fc6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r985j" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.017433 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vq7zq\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.017481 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4bb4a9f2-74c0-401e-b880-bd17f95b00d2-client-ca\") pod \"controller-manager-879f6c89f-96fdf\" (UID: \"4bb4a9f2-74c0-401e-b880-bd17f95b00d2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-96fdf" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.017540 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hctgk\" (UniqueName: \"kubernetes.io/projected/1abee1b1-8c1e-43df-89cc-5381a2ef0fc6-kube-api-access-hctgk\") pod \"machine-api-operator-5694c8668f-r985j\" (UID: \"1abee1b1-8c1e-43df-89cc-5381a2ef0fc6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r985j" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.017569 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/b8ca672d-5997-4fe0-b717-64e0b07ffbbe-config\") pod \"console-operator-58897d9998-dfhsw\" (UID: \"b8ca672d-5997-4fe0-b717-64e0b07ffbbe\") " pod="openshift-console-operator/console-operator-58897d9998-dfhsw" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.018284 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b3fd5e11-b817-4e76-a744-09eefc35c83b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-k8lml\" (UID: \"b3fd5e11-b817-4e76-a744-09eefc35c83b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-k8lml" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.018325 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vq7zq\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.018515 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8ca672d-5997-4fe0-b717-64e0b07ffbbe-config\") pod \"console-operator-58897d9998-dfhsw\" (UID: \"b8ca672d-5997-4fe0-b717-64e0b07ffbbe\") " pod="openshift-console-operator/console-operator-58897d9998-dfhsw" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.018635 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ded0ec42-2430-4e32-909c-308aeef7c49a-client-ca\") pod \"route-controller-manager-6576b87f9c-462vw\" (UID: \"ded0ec42-2430-4e32-909c-308aeef7c49a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-462vw" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.018893 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f3ea0115-bde6-42c0-b55f-2bd6d9b68d35-auth-proxy-config\") pod \"machine-approver-56656f9798-tddw7\" (UID: \"f3ea0115-bde6-42c0-b55f-2bd6d9b68d35\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tddw7" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.019619 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/960ecad3-135d-4478-bd6f-b37588dd49bb-audit-policies\") pod \"oauth-openshift-558db77b4-vq7zq\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.017428 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1abee1b1-8c1e-43df-89cc-5381a2ef0fc6-config\") pod \"machine-api-operator-5694c8668f-r985j\" (UID: \"1abee1b1-8c1e-43df-89cc-5381a2ef0fc6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r985j" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.020403 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vq7zq\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.021240 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bb4a9f2-74c0-401e-b880-bd17f95b00d2-config\") pod \"controller-manager-879f6c89f-96fdf\" (UID: \"4bb4a9f2-74c0-401e-b880-bd17f95b00d2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-96fdf" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.021308 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ded0ec42-2430-4e32-909c-308aeef7c49a-config\") pod \"route-controller-manager-6576b87f9c-462vw\" (UID: \"ded0ec42-2430-4e32-909c-308aeef7c49a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-462vw" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.021545 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1abee1b1-8c1e-43df-89cc-5381a2ef0fc6-images\") pod \"machine-api-operator-5694c8668f-r985j\" (UID: \"1abee1b1-8c1e-43df-89cc-5381a2ef0fc6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r985j" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.022610 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.023701 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77a019ee-f99c-4962-9032-6493f34adee4-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-k4sc6\" (UID: \"77a019ee-f99c-4962-9032-6493f34adee4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k4sc6" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.023572 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4bb4a9f2-74c0-401e-b880-bd17f95b00d2-serving-cert\") pod \"controller-manager-879f6c89f-96fdf\" (UID: \"4bb4a9f2-74c0-401e-b880-bd17f95b00d2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-96fdf" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.022932 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4bb4a9f2-74c0-401e-b880-bd17f95b00d2-client-ca\") pod \"controller-manager-879f6c89f-96fdf\" (UID: \"4bb4a9f2-74c0-401e-b880-bd17f95b00d2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-96fdf" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.024008 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8ca672d-5997-4fe0-b717-64e0b07ffbbe-serving-cert\") pod \"console-operator-58897d9998-dfhsw\" (UID: \"b8ca672d-5997-4fe0-b717-64e0b07ffbbe\") " pod="openshift-console-operator/console-operator-58897d9998-dfhsw" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.025665 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vq7zq\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 
14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.025750 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ded0ec42-2430-4e32-909c-308aeef7c49a-serving-cert\") pod \"route-controller-manager-6576b87f9c-462vw\" (UID: \"ded0ec42-2430-4e32-909c-308aeef7c49a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-462vw" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.025773 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vq7zq\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.026334 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vq7zq\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.026429 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vq7zq\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.026428 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vq7zq\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.026547 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/604c84ad-7483-438d-9972-a031afc477f6-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-4pqps\" (UID: \"604c84ad-7483-438d-9972-a031afc477f6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4pqps" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.027024 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1abee1b1-8c1e-43df-89cc-5381a2ef0fc6-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-r985j\" (UID: \"1abee1b1-8c1e-43df-89cc-5381a2ef0fc6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r985j" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.027302 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f3ea0115-bde6-42c0-b55f-2bd6d9b68d35-machine-approver-tls\") pod \"machine-approver-56656f9798-tddw7\" (UID: \"f3ea0115-bde6-42c0-b55f-2bd6d9b68d35\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tddw7" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.027410 4725 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3fd5e11-b817-4e76-a744-09eefc35c83b-serving-cert\") pod \"openshift-config-operator-7777fb866f-k8lml\" (UID: \"b3fd5e11-b817-4e76-a744-09eefc35c83b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-k8lml" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.028808 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vq7zq\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.029017 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vq7zq\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.029584 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vq7zq\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.062352 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.082868 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.102403 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.122059 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.142191 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.161804 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.182474 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.202431 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.222864 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 
13:17:05.252041 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.261738 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.282819 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.302752 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.323250 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.343376 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.362177 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.383134 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.402923 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.422968 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.442747 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.463207 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.503260 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.523644 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.524339 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3-bound-sa-token\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.524487 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3-trusted-ca\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.524564 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg7w2\" (UniqueName: 
\"kubernetes.io/projected/9a1f4665-bd0e-4e79-948e-1c1894945013-kube-api-access-gg7w2\") pod \"apiserver-7bbb656c7d-zfnkd\" (UID: \"9a1f4665-bd0e-4e79-948e-1c1894945013\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zfnkd" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.524612 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a1f4665-bd0e-4e79-948e-1c1894945013-serving-cert\") pod \"apiserver-7bbb656c7d-zfnkd\" (UID: \"9a1f4665-bd0e-4e79-948e-1c1894945013\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zfnkd" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.524688 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb0a83ee-f088-424f-b3c6-8ac8e2a50a27-serving-cert\") pod \"etcd-operator-b45778765-2bfkn\" (UID: \"eb0a83ee-f088-424f-b3c6-8ac8e2a50a27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2bfkn" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.524732 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3-registry-tls\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.524772 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3-registry-certificates\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.524824 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9a1f4665-bd0e-4e79-948e-1c1894945013-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-zfnkd\" (UID: \"9a1f4665-bd0e-4e79-948e-1c1894945013\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zfnkd" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.524871 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3-ca-trust-extracted\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.524913 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3-installation-pull-secrets\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.524979 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9a1f4665-bd0e-4e79-948e-1c1894945013-etcd-client\") pod \"apiserver-7bbb656c7d-zfnkd\" (UID: 
\"9a1f4665-bd0e-4e79-948e-1c1894945013\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zfnkd" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.525018 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/eb0a83ee-f088-424f-b3c6-8ac8e2a50a27-etcd-ca\") pod \"etcd-operator-b45778765-2bfkn\" (UID: \"eb0a83ee-f088-424f-b3c6-8ac8e2a50a27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2bfkn" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.525062 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thnr2\" (UniqueName: \"kubernetes.io/projected/eb0a83ee-f088-424f-b3c6-8ac8e2a50a27-kube-api-access-thnr2\") pod \"etcd-operator-b45778765-2bfkn\" (UID: \"eb0a83ee-f088-424f-b3c6-8ac8e2a50a27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2bfkn" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.525137 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.525207 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/46aacb90-0d57-4080-96db-5e477c100fe8-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-s9xxp\" (UID: \"46aacb90-0d57-4080-96db-5e477c100fe8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-s9xxp" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.525271 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/46aacb90-0d57-4080-96db-5e477c100fe8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-s9xxp\" (UID: \"46aacb90-0d57-4080-96db-5e477c100fe8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-s9xxp" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.525314 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e477098-5f8a-4194-9125-806a2d8724ce-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pfnhc\" (UID: \"1e477098-5f8a-4194-9125-806a2d8724ce\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pfnhc" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.525358 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdn7m\" (UniqueName: \"kubernetes.io/projected/1e477098-5f8a-4194-9125-806a2d8724ce-kube-api-access-bdn7m\") pod \"authentication-operator-69f744f599-pfnhc\" (UID: \"1e477098-5f8a-4194-9125-806a2d8724ce\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pfnhc" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.525409 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/9a1f4665-bd0e-4e79-948e-1c1894945013-audit-policies\") pod \"apiserver-7bbb656c7d-zfnkd\" (UID: \"9a1f4665-bd0e-4e79-948e-1c1894945013\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zfnkd" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.525481 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e477098-5f8a-4194-9125-806a2d8724ce-serving-cert\") pod \"authentication-operator-69f744f599-pfnhc\" (UID: \"1e477098-5f8a-4194-9125-806a2d8724ce\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pfnhc" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.525528 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e477098-5f8a-4194-9125-806a2d8724ce-service-ca-bundle\") pod \"authentication-operator-69f744f599-pfnhc\" (UID: \"1e477098-5f8a-4194-9125-806a2d8724ce\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pfnhc" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.525588 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9a1f4665-bd0e-4e79-948e-1c1894945013-audit-dir\") pod \"apiserver-7bbb656c7d-zfnkd\" (UID: \"9a1f4665-bd0e-4e79-948e-1c1894945013\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zfnkd" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.525648 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/46aacb90-0d57-4080-96db-5e477c100fe8-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-s9xxp\" (UID: \"46aacb90-0d57-4080-96db-5e477c100fe8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-s9xxp" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.525770 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a1f4665-bd0e-4e79-948e-1c1894945013-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-zfnkd\" (UID: \"9a1f4665-bd0e-4e79-948e-1c1894945013\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zfnkd" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.525849 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm47q\" (UniqueName: \"kubernetes.io/projected/c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3-kube-api-access-jm47q\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.525938 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/eb0a83ee-f088-424f-b3c6-8ac8e2a50a27-etcd-service-ca\") pod \"etcd-operator-b45778765-2bfkn\" (UID: \"eb0a83ee-f088-424f-b3c6-8ac8e2a50a27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2bfkn" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.526035 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/eb0a83ee-f088-424f-b3c6-8ac8e2a50a27-etcd-client\") pod \"etcd-operator-b45778765-2bfkn\" (UID: \"eb0a83ee-f088-424f-b3c6-8ac8e2a50a27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2bfkn" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.526142 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw8l8\" (UniqueName: \"kubernetes.io/projected/46aacb90-0d57-4080-96db-5e477c100fe8-kube-api-access-qw8l8\") pod \"cluster-image-registry-operator-dc59b4c8b-s9xxp\" (UID: \"46aacb90-0d57-4080-96db-5e477c100fe8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-s9xxp" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.526225 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb0a83ee-f088-424f-b3c6-8ac8e2a50a27-config\") pod \"etcd-operator-b45778765-2bfkn\" (UID: \"eb0a83ee-f088-424f-b3c6-8ac8e2a50a27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2bfkn" Oct 14 13:17:05 crc kubenswrapper[4725]: E1014 13:17:05.526332 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:06.026302014 +0000 UTC m=+142.874736993 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.526732 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9a1f4665-bd0e-4e79-948e-1c1894945013-encryption-config\") pod \"apiserver-7bbb656c7d-zfnkd\" (UID: \"9a1f4665-bd0e-4e79-948e-1c1894945013\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zfnkd" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.526835 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e477098-5f8a-4194-9125-806a2d8724ce-config\") pod \"authentication-operator-69f744f599-pfnhc\" (UID: \"1e477098-5f8a-4194-9125-806a2d8724ce\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pfnhc" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.542613 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.562684 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.582961 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.603437 4725 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.622749 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.628312 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:05 crc kubenswrapper[4725]: E1014 13:17:05.628478 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:06.128430344 +0000 UTC m=+142.976865153 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.628750 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9a1f4665-bd0e-4e79-948e-1c1894945013-etcd-client\") pod \"apiserver-7bbb656c7d-zfnkd\" (UID: \"9a1f4665-bd0e-4e79-948e-1c1894945013\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zfnkd" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.628796 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/eb0a83ee-f088-424f-b3c6-8ac8e2a50a27-etcd-ca\") pod \"etcd-operator-b45778765-2bfkn\" (UID: \"eb0a83ee-f088-424f-b3c6-8ac8e2a50a27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2bfkn" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.628833 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/10813d7e-3ed3-49a7-a2ad-5aa0db76a25d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-658kg\" (UID: \"10813d7e-3ed3-49a7-a2ad-5aa0db76a25d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-658kg" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.628948 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4432b354-68b6-461c-98f9-11651a4ec51a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-s94lq\" (UID: \"4432b354-68b6-461c-98f9-11651a4ec51a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-s94lq" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.629029 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/42497b95-3bd9-480d-9393-db14108c977e-audit\") pod 
\"apiserver-76f77b778f-p8t6g\" (UID: \"42497b95-3bd9-480d-9393-db14108c977e\") " pod="openshift-apiserver/apiserver-76f77b778f-p8t6g" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.629064 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/42497b95-3bd9-480d-9393-db14108c977e-etcd-serving-ca\") pod \"apiserver-76f77b778f-p8t6g\" (UID: \"42497b95-3bd9-480d-9393-db14108c977e\") " pod="openshift-apiserver/apiserver-76f77b778f-p8t6g" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.629130 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d54b2ace-6596-4bcf-88cb-23f381105f80-srv-cert\") pod \"catalog-operator-68c6474976-f6klx\" (UID: \"d54b2ace-6596-4bcf-88cb-23f381105f80\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f6klx" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.629179 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44854d9f-c150-47e2-b099-66e7a7f483e2-serving-cert\") pod \"service-ca-operator-777779d784-6dvkx\" (UID: \"44854d9f-c150-47e2-b099-66e7a7f483e2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6dvkx" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.629244 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/490a96f8-3a20-414a-b664-c2df9a8d373f-webhook-cert\") pod \"packageserver-d55dfcdfc-ftgzg\" (UID: \"490a96f8-3a20-414a-b664-c2df9a8d373f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftgzg" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.629292 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23ffd8bd-e6ca-4824-900e-08ba0cd80041-metrics-certs\") pod \"router-default-5444994796-twd2b\" (UID: \"23ffd8bd-e6ca-4824-900e-08ba0cd80041\") " pod="openshift-ingress/router-default-5444994796-twd2b" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.629317 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ddb4cc6f-047d-4acb-ad5c-b216f5ceb8ce-node-bootstrap-token\") pod \"machine-config-server-h2rt9\" (UID: \"ddb4cc6f-047d-4acb-ad5c-b216f5ceb8ce\") " pod="openshift-machine-config-operator/machine-config-server-h2rt9" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.629350 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsr6n\" (UniqueName: \"kubernetes.io/projected/64be0777-3e55-42b0-8832-8c58f1980f27-kube-api-access-gsr6n\") pod \"package-server-manager-789f6589d5-b9mz9\" (UID: \"64be0777-3e55-42b0-8832-8c58f1980f27\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b9mz9" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.629512 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b372f112-486d-4848-a78f-552a485abacc-csi-data-dir\") pod \"csi-hostpathplugin-jnvzr\" (UID: \"b372f112-486d-4848-a78f-552a485abacc\") " 
pod="hostpath-provisioner/csi-hostpathplugin-jnvzr" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.629657 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.629710 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49slr\" (UniqueName: \"kubernetes.io/projected/f19f06ec-8bbf-4d8e-a6ef-f11e032454a9-kube-api-access-49slr\") pod \"service-ca-9c57cc56f-6pc7v\" (UID: \"f19f06ec-8bbf-4d8e-a6ef-f11e032454a9\") " pod="openshift-service-ca/service-ca-9c57cc56f-6pc7v" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.629790 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qthm8\" (UniqueName: \"kubernetes.io/projected/e1e76f4a-94bf-473d-9658-be90b7f79e56-kube-api-access-qthm8\") pod \"dns-operator-744455d44c-8b79b\" (UID: \"e1e76f4a-94bf-473d-9658-be90b7f79e56\") " pod="openshift-dns-operator/dns-operator-744455d44c-8b79b" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.629828 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d4f5074-61bb-4e5b-91ce-d8149ddb50f1-config-volume\") pod \"collect-profiles-29340795-6q4fq\" (UID: \"9d4f5074-61bb-4e5b-91ce-d8149ddb50f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340795-6q4fq" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.629867 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e477098-5f8a-4194-9125-806a2d8724ce-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pfnhc\" (UID: \"1e477098-5f8a-4194-9125-806a2d8724ce\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pfnhc" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.629904 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9-oauth-serving-cert\") pod \"console-f9d7485db-dbshk\" (UID: \"9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9\") " pod="openshift-console/console-f9d7485db-dbshk" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.629942 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9a1f4665-bd0e-4e79-948e-1c1894945013-audit-policies\") pod \"apiserver-7bbb656c7d-zfnkd\" (UID: \"9a1f4665-bd0e-4e79-948e-1c1894945013\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zfnkd" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.630007 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e477098-5f8a-4194-9125-806a2d8724ce-serving-cert\") pod \"authentication-operator-69f744f599-pfnhc\" (UID: \"1e477098-5f8a-4194-9125-806a2d8724ce\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pfnhc" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 
13:17:05.630046 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn5nq\" (UniqueName: \"kubernetes.io/projected/10813d7e-3ed3-49a7-a2ad-5aa0db76a25d-kube-api-access-zn5nq\") pod \"control-plane-machine-set-operator-78cbb6b69f-658kg\" (UID: \"10813d7e-3ed3-49a7-a2ad-5aa0db76a25d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-658kg" Oct 14 13:17:05 crc kubenswrapper[4725]: E1014 13:17:05.630059 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:06.130040339 +0000 UTC m=+142.978475148 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.630087 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7148d1c-3586-4dae-a72d-543940574d2e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9rnn7\" (UID: \"a7148d1c-3586-4dae-a72d-543940574d2e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9rnn7" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.630110 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/29bc7b45-9968-48c5-be01-2c8b7f39df13-metrics-tls\") pod \"ingress-operator-5b745b69d9-w98cd\" (UID: \"29bc7b45-9968-48c5-be01-2c8b7f39df13\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w98cd" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.630139 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9a1f4665-bd0e-4e79-948e-1c1894945013-audit-dir\") pod \"apiserver-7bbb656c7d-zfnkd\" (UID: \"9a1f4665-bd0e-4e79-948e-1c1894945013\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zfnkd" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.630157 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4357984b-72f2-4c52-bae4-1dce4616b0df-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-mspcl\" (UID: \"4357984b-72f2-4c52-bae4-1dce4616b0df\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mspcl" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.630176 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/46aacb90-0d57-4080-96db-5e477c100fe8-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-s9xxp\" (UID: \"46aacb90-0d57-4080-96db-5e477c100fe8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-s9xxp" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.630195 4725 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/218135fe-157d-49e2-b391-acf0af7fdc3e-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-htmcq\" (UID: \"218135fe-157d-49e2-b391-acf0af7fdc3e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-htmcq" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.630216 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5zp7\" (UniqueName: \"kubernetes.io/projected/d1a46958-489a-4357-adc6-ad2990dc19cd-kube-api-access-r5zp7\") pod \"olm-operator-6b444d44fb-4g6pl\" (UID: \"d1a46958-489a-4357-adc6-ad2990dc19cd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4g6pl" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.630238 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a1f4665-bd0e-4e79-948e-1c1894945013-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-zfnkd\" (UID: \"9a1f4665-bd0e-4e79-948e-1c1894945013\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zfnkd" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.630258 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9-trusted-ca-bundle\") pod \"console-f9d7485db-dbshk\" (UID: \"9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9\") " pod="openshift-console/console-f9d7485db-dbshk" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.630291 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/eb0a83ee-f088-424f-b3c6-8ac8e2a50a27-etcd-client\") pod \"etcd-operator-b45778765-2bfkn\" (UID: \"eb0a83ee-f088-424f-b3c6-8ac8e2a50a27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2bfkn" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.630311 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b372f112-486d-4848-a78f-552a485abacc-socket-dir\") pod \"csi-hostpathplugin-jnvzr\" (UID: \"b372f112-486d-4848-a78f-552a485abacc\") " pod="hostpath-provisioner/csi-hostpathplugin-jnvzr" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.630330 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp7lp\" (UniqueName: \"kubernetes.io/projected/3fd3f30a-c1ac-4a8f-8d95-9d4165d608a2-kube-api-access-xp7lp\") pod \"downloads-7954f5f757-4v6wb\" (UID: \"3fd3f30a-c1ac-4a8f-8d95-9d4165d608a2\") " pod="openshift-console/downloads-7954f5f757-4v6wb" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.630347 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/490a96f8-3a20-414a-b664-c2df9a8d373f-apiservice-cert\") pod \"packageserver-d55dfcdfc-ftgzg\" (UID: \"490a96f8-3a20-414a-b664-c2df9a8d373f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftgzg" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.630363 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/ddb4cc6f-047d-4acb-ad5c-b216f5ceb8ce-certs\") pod \"machine-config-server-h2rt9\" (UID: \"ddb4cc6f-047d-4acb-ad5c-b216f5ceb8ce\") " pod="openshift-machine-config-operator/machine-config-server-h2rt9" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.630380 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/87d34324-1bfd-47d6-8551-7bc545575a4a-cert\") pod \"ingress-canary-zjwcj\" (UID: \"87d34324-1bfd-47d6-8551-7bc545575a4a\") " pod="openshift-ingress-canary/ingress-canary-zjwcj" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.630398 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx7jk\" (UniqueName: \"kubernetes.io/projected/b372f112-486d-4848-a78f-552a485abacc-kube-api-access-mx7jk\") pod \"csi-hostpathplugin-jnvzr\" (UID: \"b372f112-486d-4848-a78f-552a485abacc\") " pod="hostpath-provisioner/csi-hostpathplugin-jnvzr" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.630443 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb0a83ee-f088-424f-b3c6-8ac8e2a50a27-config\") pod \"etcd-operator-b45778765-2bfkn\" (UID: \"eb0a83ee-f088-424f-b3c6-8ac8e2a50a27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2bfkn" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.630495 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9a1f4665-bd0e-4e79-948e-1c1894945013-encryption-config\") pod \"apiserver-7bbb656c7d-zfnkd\" (UID: \"9a1f4665-bd0e-4e79-948e-1c1894945013\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zfnkd" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.630518 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f2s6\" (UniqueName: \"kubernetes.io/projected/aba4ae84-aa5c-4790-a5cc-b2c865bae5dd-kube-api-access-7f2s6\") pod \"marketplace-operator-79b997595-hb586\" (UID: \"aba4ae84-aa5c-4790-a5cc-b2c865bae5dd\") " pod="openshift-marketplace/marketplace-operator-79b997595-hb586" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.630546 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/42497b95-3bd9-480d-9393-db14108c977e-image-import-ca\") pod \"apiserver-76f77b778f-p8t6g\" (UID: \"42497b95-3bd9-480d-9393-db14108c977e\") " pod="openshift-apiserver/apiserver-76f77b778f-p8t6g" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.630574 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e477098-5f8a-4194-9125-806a2d8724ce-config\") pod \"authentication-operator-69f744f599-pfnhc\" (UID: \"1e477098-5f8a-4194-9125-806a2d8724ce\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pfnhc" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.630600 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/23ffd8bd-e6ca-4824-900e-08ba0cd80041-stats-auth\") pod \"router-default-5444994796-twd2b\" (UID: \"23ffd8bd-e6ca-4824-900e-08ba0cd80041\") " pod="openshift-ingress/router-default-5444994796-twd2b" Oct 14 13:17:05 crc 
kubenswrapper[4725]: I1014 13:17:05.630624 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxdlz\" (UniqueName: \"kubernetes.io/projected/9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9-kube-api-access-wxdlz\") pod \"console-f9d7485db-dbshk\" (UID: \"9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9\") " pod="openshift-console/console-f9d7485db-dbshk" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.630665 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t92z\" (UniqueName: \"kubernetes.io/projected/98ec98f8-f0b2-4170-bb1b-49bf82c82c76-kube-api-access-9t92z\") pod \"migrator-59844c95c7-dmz29\" (UID: \"98ec98f8-f0b2-4170-bb1b-49bf82c82c76\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dmz29" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.630694 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3-trusted-ca\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.630711 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mztb9\" (UniqueName: \"kubernetes.io/projected/4432b354-68b6-461c-98f9-11651a4ec51a-kube-api-access-mztb9\") pod \"multus-admission-controller-857f4d67dd-s94lq\" (UID: \"4432b354-68b6-461c-98f9-11651a4ec51a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-s94lq" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.630727 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tpmn\" (UniqueName: \"kubernetes.io/projected/29bc7b45-9968-48c5-be01-2c8b7f39df13-kube-api-access-5tpmn\") pod \"ingress-operator-5b745b69d9-w98cd\" (UID: \"29bc7b45-9968-48c5-be01-2c8b7f39df13\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w98cd" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.630749 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a1f4665-bd0e-4e79-948e-1c1894945013-serving-cert\") pod \"apiserver-7bbb656c7d-zfnkd\" (UID: \"9a1f4665-bd0e-4e79-948e-1c1894945013\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zfnkd" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.630767 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44854d9f-c150-47e2-b099-66e7a7f483e2-config\") pod \"service-ca-operator-777779d784-6dvkx\" (UID: \"44854d9f-c150-47e2-b099-66e7a7f483e2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6dvkx" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.630284 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9a1f4665-bd0e-4e79-948e-1c1894945013-audit-dir\") pod \"apiserver-7bbb656c7d-zfnkd\" (UID: \"9a1f4665-bd0e-4e79-948e-1c1894945013\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zfnkd" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.631674 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/eb0a83ee-f088-424f-b3c6-8ac8e2a50a27-config\") pod \"etcd-operator-b45778765-2bfkn\" (UID: \"eb0a83ee-f088-424f-b3c6-8ac8e2a50a27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2bfkn" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.631733 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb0a83ee-f088-424f-b3c6-8ac8e2a50a27-serving-cert\") pod \"etcd-operator-b45778765-2bfkn\" (UID: \"eb0a83ee-f088-424f-b3c6-8ac8e2a50a27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2bfkn" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.631754 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42ee8d72-7daa-4835-b976-db8e34dfdb3c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6brk2\" (UID: \"42ee8d72-7daa-4835-b976-db8e34dfdb3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6brk2" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.631773 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/23ffd8bd-e6ca-4824-900e-08ba0cd80041-default-certificate\") pod \"router-default-5444994796-twd2b\" (UID: \"23ffd8bd-e6ca-4824-900e-08ba0cd80041\") " pod="openshift-ingress/router-default-5444994796-twd2b" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.631774 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9a1f4665-bd0e-4e79-948e-1c1894945013-audit-policies\") pod \"apiserver-7bbb656c7d-zfnkd\" (UID: \"9a1f4665-bd0e-4e79-948e-1c1894945013\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zfnkd" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.631792 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3-registry-tls\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.631816 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3-registry-certificates\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.631861 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/aba4ae84-aa5c-4790-a5cc-b2c865bae5dd-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hb586\" (UID: \"aba4ae84-aa5c-4790-a5cc-b2c865bae5dd\") " pod="openshift-marketplace/marketplace-operator-79b997595-hb586" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.631881 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/29bc7b45-9968-48c5-be01-2c8b7f39df13-bound-sa-token\") pod \"ingress-operator-5b745b69d9-w98cd\" (UID: 
\"29bc7b45-9968-48c5-be01-2c8b7f39df13\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w98cd" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.631902 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9a1f4665-bd0e-4e79-948e-1c1894945013-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-zfnkd\" (UID: \"9a1f4665-bd0e-4e79-948e-1c1894945013\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zfnkd" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.631918 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42ee8d72-7daa-4835-b976-db8e34dfdb3c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6brk2\" (UID: \"42ee8d72-7daa-4835-b976-db8e34dfdb3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6brk2" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.631937 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kprnr\" (UniqueName: \"kubernetes.io/projected/42497b95-3bd9-480d-9393-db14108c977e-kube-api-access-kprnr\") pod \"apiserver-76f77b778f-p8t6g\" (UID: \"42497b95-3bd9-480d-9393-db14108c977e\") " pod="openshift-apiserver/apiserver-76f77b778f-p8t6g" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.632034 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3-ca-trust-extracted\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.632055 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3-installation-pull-secrets\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.632075 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7148d1c-3586-4dae-a72d-543940574d2e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9rnn7\" (UID: \"a7148d1c-3586-4dae-a72d-543940574d2e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9rnn7" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.632115 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/42497b95-3bd9-480d-9393-db14108c977e-encryption-config\") pod \"apiserver-76f77b778f-p8t6g\" (UID: \"42497b95-3bd9-480d-9393-db14108c977e\") " pod="openshift-apiserver/apiserver-76f77b778f-p8t6g" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.632141 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thnr2\" (UniqueName: \"kubernetes.io/projected/eb0a83ee-f088-424f-b3c6-8ac8e2a50a27-kube-api-access-thnr2\") pod \"etcd-operator-b45778765-2bfkn\" (UID: \"eb0a83ee-f088-424f-b3c6-8ac8e2a50a27\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-2bfkn" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.632178 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/77bd2504-9807-458c-80c3-a61e41bfcef2-metrics-tls\") pod \"dns-default-zbjgv\" (UID: \"77bd2504-9807-458c-80c3-a61e41bfcef2\") " pod="openshift-dns/dns-default-zbjgv" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.632175 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e477098-5f8a-4194-9125-806a2d8724ce-config\") pod \"authentication-operator-69f744f599-pfnhc\" (UID: \"1e477098-5f8a-4194-9125-806a2d8724ce\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pfnhc" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.632199 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/42497b95-3bd9-480d-9393-db14108c977e-node-pullsecrets\") pod \"apiserver-76f77b778f-p8t6g\" (UID: \"42497b95-3bd9-480d-9393-db14108c977e\") " pod="openshift-apiserver/apiserver-76f77b778f-p8t6g" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.632302 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e88b4cc4-37fd-43c3-aaad-ea153ece7b28-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kq2mg\" (UID: \"e88b4cc4-37fd-43c3-aaad-ea153ece7b28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kq2mg" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.632328 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4357984b-72f2-4c52-bae4-1dce4616b0df-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-mspcl\" (UID: \"4357984b-72f2-4c52-bae4-1dce4616b0df\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mspcl" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.632348 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42497b95-3bd9-480d-9393-db14108c977e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-p8t6g\" (UID: \"42497b95-3bd9-480d-9393-db14108c977e\") " pod="openshift-apiserver/apiserver-76f77b778f-p8t6g" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.632374 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9-console-serving-cert\") pod \"console-f9d7485db-dbshk\" (UID: \"9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9\") " pod="openshift-console/console-f9d7485db-dbshk" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.632392 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9-console-oauth-config\") pod \"console-f9d7485db-dbshk\" (UID: \"9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9\") " pod="openshift-console/console-f9d7485db-dbshk" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.632416 4725 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/218135fe-157d-49e2-b391-acf0af7fdc3e-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-htmcq\" (UID: \"218135fe-157d-49e2-b391-acf0af7fdc3e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-htmcq" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.632212 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a1f4665-bd0e-4e79-948e-1c1894945013-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-zfnkd\" (UID: \"9a1f4665-bd0e-4e79-948e-1c1894945013\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zfnkd" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.632486 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e1e76f4a-94bf-473d-9658-be90b7f79e56-metrics-tls\") pod \"dns-operator-744455d44c-8b79b\" (UID: \"e1e76f4a-94bf-473d-9658-be90b7f79e56\") " pod="openshift-dns-operator/dns-operator-744455d44c-8b79b" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.632512 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xpnc\" (UniqueName: \"kubernetes.io/projected/87d34324-1bfd-47d6-8551-7bc545575a4a-kube-api-access-6xpnc\") pod \"ingress-canary-zjwcj\" (UID: \"87d34324-1bfd-47d6-8551-7bc545575a4a\") " pod="openshift-ingress-canary/ingress-canary-zjwcj" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.632529 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b372f112-486d-4848-a78f-552a485abacc-mountpoint-dir\") pod \"csi-hostpathplugin-jnvzr\" (UID: \"b372f112-486d-4848-a78f-552a485abacc\") " pod="hostpath-provisioner/csi-hostpathplugin-jnvzr" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.632574 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/46aacb90-0d57-4080-96db-5e477c100fe8-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-s9xxp\" (UID: \"46aacb90-0d57-4080-96db-5e477c100fe8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-s9xxp" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.632594 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/46c9f20f-915c-48c9-b079-ed159fa09d70-proxy-tls\") pod \"machine-config-operator-74547568cd-f7scx\" (UID: \"46c9f20f-915c-48c9-b079-ed159fa09d70\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f7scx" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.632611 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77bd2504-9807-458c-80c3-a61e41bfcef2-config-volume\") pod \"dns-default-zbjgv\" (UID: \"77bd2504-9807-458c-80c3-a61e41bfcef2\") " pod="openshift-dns/dns-default-zbjgv" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.632629 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62g5x\" (UniqueName: 
\"kubernetes.io/projected/ddb4cc6f-047d-4acb-ad5c-b216f5ceb8ce-kube-api-access-62g5x\") pod \"machine-config-server-h2rt9\" (UID: \"ddb4cc6f-047d-4acb-ad5c-b216f5ceb8ce\") " pod="openshift-machine-config-operator/machine-config-server-h2rt9" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.632646 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cngjq\" (UniqueName: \"kubernetes.io/projected/d54b2ace-6596-4bcf-88cb-23f381105f80-kube-api-access-cngjq\") pod \"catalog-operator-68c6474976-f6klx\" (UID: \"d54b2ace-6596-4bcf-88cb-23f381105f80\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f6klx" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.632859 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/46aacb90-0d57-4080-96db-5e477c100fe8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-s9xxp\" (UID: \"46aacb90-0d57-4080-96db-5e477c100fe8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-s9xxp" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.633026 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdn7m\" (UniqueName: \"kubernetes.io/projected/1e477098-5f8a-4194-9125-806a2d8724ce-kube-api-access-bdn7m\") pod \"authentication-operator-69f744f599-pfnhc\" (UID: \"1e477098-5f8a-4194-9125-806a2d8724ce\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pfnhc" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.633061 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8hcg\" (UniqueName: \"kubernetes.io/projected/46c9f20f-915c-48c9-b079-ed159fa09d70-kube-api-access-w8hcg\") pod \"machine-config-operator-74547568cd-f7scx\" (UID: \"46c9f20f-915c-48c9-b079-ed159fa09d70\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f7scx" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.633748 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3-registry-certificates\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.633788 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/46aacb90-0d57-4080-96db-5e477c100fe8-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-s9xxp\" (UID: \"46aacb90-0d57-4080-96db-5e477c100fe8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-s9xxp" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.634095 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e477098-5f8a-4194-9125-806a2d8724ce-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pfnhc\" (UID: \"1e477098-5f8a-4194-9125-806a2d8724ce\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pfnhc" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.634147 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1e477098-5f8a-4194-9125-806a2d8724ce-serving-cert\") pod \"authentication-operator-69f744f599-pfnhc\" (UID: \"1e477098-5f8a-4194-9125-806a2d8724ce\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pfnhc" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.634381 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3-ca-trust-extracted\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.634611 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e477098-5f8a-4194-9125-806a2d8724ce-service-ca-bundle\") pod \"authentication-operator-69f744f599-pfnhc\" (UID: \"1e477098-5f8a-4194-9125-806a2d8724ce\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pfnhc" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.634646 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9a1f4665-bd0e-4e79-948e-1c1894945013-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-zfnkd\" (UID: \"9a1f4665-bd0e-4e79-948e-1c1894945013\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zfnkd" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.634907 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jskfz\" (UniqueName: \"kubernetes.io/projected/44854d9f-c150-47e2-b099-66e7a7f483e2-kube-api-access-jskfz\") pod \"service-ca-operator-777779d784-6dvkx\" (UID: \"44854d9f-c150-47e2-b099-66e7a7f483e2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6dvkx" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.635052 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3-registry-tls\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.635155 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/46c9f20f-915c-48c9-b079-ed159fa09d70-auth-proxy-config\") pod \"machine-config-operator-74547568cd-f7scx\" (UID: \"46c9f20f-915c-48c9-b079-ed159fa09d70\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f7scx" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.635288 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9a1f4665-bd0e-4e79-948e-1c1894945013-encryption-config\") pod \"apiserver-7bbb656c7d-zfnkd\" (UID: \"9a1f4665-bd0e-4e79-948e-1c1894945013\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zfnkd" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.635345 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm47q\" (UniqueName: \"kubernetes.io/projected/c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3-kube-api-access-jm47q\") pod \"image-registry-697d97f7c8-96hd4\" (UID: 
\"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.635384 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/eb0a83ee-f088-424f-b3c6-8ac8e2a50a27-etcd-service-ca\") pod \"etcd-operator-b45778765-2bfkn\" (UID: \"eb0a83ee-f088-424f-b3c6-8ac8e2a50a27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2bfkn" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.635408 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f19f06ec-8bbf-4d8e-a6ef-f11e032454a9-signing-cabundle\") pod \"service-ca-9c57cc56f-6pc7v\" (UID: \"f19f06ec-8bbf-4d8e-a6ef-f11e032454a9\") " pod="openshift-service-ca/service-ca-9c57cc56f-6pc7v" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.635538 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2gxm\" (UniqueName: \"kubernetes.io/projected/4ececce9-bde7-4d8c-b4b8-75b4a8538c04-kube-api-access-h2gxm\") pod \"machine-config-controller-84d6567774-f9xkt\" (UID: \"4ececce9-bde7-4d8c-b4b8-75b4a8538c04\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f9xkt" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.635589 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23ffd8bd-e6ca-4824-900e-08ba0cd80041-service-ca-bundle\") pod \"router-default-5444994796-twd2b\" (UID: \"23ffd8bd-e6ca-4824-900e-08ba0cd80041\") " pod="openshift-ingress/router-default-5444994796-twd2b" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.635613 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b372f112-486d-4848-a78f-552a485abacc-registration-dir\") pod \"csi-hostpathplugin-jnvzr\" (UID: \"b372f112-486d-4848-a78f-552a485abacc\") " pod="hostpath-provisioner/csi-hostpathplugin-jnvzr" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.635661 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfqsb\" (UniqueName: \"kubernetes.io/projected/9d4f5074-61bb-4e5b-91ce-d8149ddb50f1-kube-api-access-wfqsb\") pod \"collect-profiles-29340795-6q4fq\" (UID: \"9d4f5074-61bb-4e5b-91ce-d8149ddb50f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340795-6q4fq" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.635705 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/46c9f20f-915c-48c9-b079-ed159fa09d70-images\") pod \"machine-config-operator-74547568cd-f7scx\" (UID: \"46c9f20f-915c-48c9-b079-ed159fa09d70\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f7scx" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.635775 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4ececce9-bde7-4d8c-b4b8-75b4a8538c04-proxy-tls\") pod \"machine-config-controller-84d6567774-f9xkt\" (UID: \"4ececce9-bde7-4d8c-b4b8-75b4a8538c04\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f9xkt" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.635813 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7148d1c-3586-4dae-a72d-543940574d2e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9rnn7\" (UID: \"a7148d1c-3586-4dae-a72d-543940574d2e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9rnn7" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.635845 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d1a46958-489a-4357-adc6-ad2990dc19cd-srv-cert\") pod \"olm-operator-6b444d44fb-4g6pl\" (UID: \"d1a46958-489a-4357-adc6-ad2990dc19cd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4g6pl" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.635903 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw8l8\" (UniqueName: \"kubernetes.io/projected/46aacb90-0d57-4080-96db-5e477c100fe8-kube-api-access-qw8l8\") pod \"cluster-image-registry-operator-dc59b4c8b-s9xxp\" (UID: \"46aacb90-0d57-4080-96db-5e477c100fe8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-s9xxp" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.635958 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b372f112-486d-4848-a78f-552a485abacc-plugins-dir\") pod \"csi-hostpathplugin-jnvzr\" (UID: \"b372f112-486d-4848-a78f-552a485abacc\") " pod="hostpath-provisioner/csi-hostpathplugin-jnvzr" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.636087 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29bc7b45-9968-48c5-be01-2c8b7f39df13-trusted-ca\") pod \"ingress-operator-5b745b69d9-w98cd\" (UID: \"29bc7b45-9968-48c5-be01-2c8b7f39df13\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w98cd" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.636128 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/eb0a83ee-f088-424f-b3c6-8ac8e2a50a27-etcd-ca\") pod \"etcd-operator-b45778765-2bfkn\" (UID: \"eb0a83ee-f088-424f-b3c6-8ac8e2a50a27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2bfkn" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.636132 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9-service-ca\") pod \"console-f9d7485db-dbshk\" (UID: \"9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9\") " pod="openshift-console/console-f9d7485db-dbshk" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.636194 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/490a96f8-3a20-414a-b664-c2df9a8d373f-tmpfs\") pod \"packageserver-d55dfcdfc-ftgzg\" (UID: \"490a96f8-3a20-414a-b664-c2df9a8d373f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftgzg" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.636210 4725 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/eb0a83ee-f088-424f-b3c6-8ac8e2a50a27-etcd-client\") pod \"etcd-operator-b45778765-2bfkn\" (UID: \"eb0a83ee-f088-424f-b3c6-8ac8e2a50a27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2bfkn" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.636312 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f19f06ec-8bbf-4d8e-a6ef-f11e032454a9-signing-key\") pod \"service-ca-9c57cc56f-6pc7v\" (UID: \"f19f06ec-8bbf-4d8e-a6ef-f11e032454a9\") " pod="openshift-service-ca/service-ca-9c57cc56f-6pc7v" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.636389 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvk2x\" (UniqueName: \"kubernetes.io/projected/77bd2504-9807-458c-80c3-a61e41bfcef2-kube-api-access-rvk2x\") pod \"dns-default-zbjgv\" (UID: \"77bd2504-9807-458c-80c3-a61e41bfcef2\") " pod="openshift-dns/dns-default-zbjgv" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.636491 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/64be0777-3e55-42b0-8832-8c58f1980f27-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-b9mz9\" (UID: \"64be0777-3e55-42b0-8832-8c58f1980f27\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b9mz9" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.636517 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d54b2ace-6596-4bcf-88cb-23f381105f80-profile-collector-cert\") pod \"catalog-operator-68c6474976-f6klx\" (UID: \"d54b2ace-6596-4bcf-88cb-23f381105f80\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f6klx" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.636512 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3-trusted-ca\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.636616 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxpnb\" (UniqueName: \"kubernetes.io/projected/23ffd8bd-e6ca-4824-900e-08ba0cd80041-kube-api-access-xxpnb\") pod \"router-default-5444994796-twd2b\" (UID: \"23ffd8bd-e6ca-4824-900e-08ba0cd80041\") " pod="openshift-ingress/router-default-5444994796-twd2b" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.636684 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aba4ae84-aa5c-4790-a5cc-b2c865bae5dd-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hb586\" (UID: \"aba4ae84-aa5c-4790-a5cc-b2c865bae5dd\") " pod="openshift-marketplace/marketplace-operator-79b997595-hb586" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.636730 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/42497b95-3bd9-480d-9393-db14108c977e-audit-dir\") pod \"apiserver-76f77b778f-p8t6g\" (UID: \"42497b95-3bd9-480d-9393-db14108c977e\") " pod="openshift-apiserver/apiserver-76f77b778f-p8t6g" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.636794 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3-bound-sa-token\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.636874 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgqdh\" (UniqueName: \"kubernetes.io/projected/490a96f8-3a20-414a-b664-c2df9a8d373f-kube-api-access-tgqdh\") pod \"packageserver-d55dfcdfc-ftgzg\" (UID: \"490a96f8-3a20-414a-b664-c2df9a8d373f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftgzg" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.636901 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/218135fe-157d-49e2-b391-acf0af7fdc3e-config\") pod \"kube-apiserver-operator-766d6c64bb-htmcq\" (UID: \"218135fe-157d-49e2-b391-acf0af7fdc3e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-htmcq" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.636937 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9-console-config\") pod \"console-f9d7485db-dbshk\" (UID: \"9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9\") " pod="openshift-console/console-f9d7485db-dbshk" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.636967 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9d4f5074-61bb-4e5b-91ce-d8149ddb50f1-secret-volume\") pod \"collect-profiles-29340795-6q4fq\" (UID: \"9d4f5074-61bb-4e5b-91ce-d8149ddb50f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340795-6q4fq" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.637018 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg7w2\" (UniqueName: \"kubernetes.io/projected/9a1f4665-bd0e-4e79-948e-1c1894945013-kube-api-access-gg7w2\") pod \"apiserver-7bbb656c7d-zfnkd\" (UID: \"9a1f4665-bd0e-4e79-948e-1c1894945013\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zfnkd" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.637296 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e88b4cc4-37fd-43c3-aaad-ea153ece7b28-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kq2mg\" (UID: \"e88b4cc4-37fd-43c3-aaad-ea153ece7b28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kq2mg" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.637362 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4ececce9-bde7-4d8c-b4b8-75b4a8538c04-mcc-auth-proxy-config\") pod 
\"machine-config-controller-84d6567774-f9xkt\" (UID: \"4ececce9-bde7-4d8c-b4b8-75b4a8538c04\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f9xkt" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.637471 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42ee8d72-7daa-4835-b976-db8e34dfdb3c-config\") pod \"kube-controller-manager-operator-78b949d7b-6brk2\" (UID: \"42ee8d72-7daa-4835-b976-db8e34dfdb3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6brk2" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.637523 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42497b95-3bd9-480d-9393-db14108c977e-config\") pod \"apiserver-76f77b778f-p8t6g\" (UID: \"42497b95-3bd9-480d-9393-db14108c977e\") " pod="openshift-apiserver/apiserver-76f77b778f-p8t6g" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.637578 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-749qj\" (UniqueName: \"kubernetes.io/projected/4357984b-72f2-4c52-bae4-1dce4616b0df-kube-api-access-749qj\") pod \"kube-storage-version-migrator-operator-b67b599dd-mspcl\" (UID: \"4357984b-72f2-4c52-bae4-1dce4616b0df\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mspcl" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.637609 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/42497b95-3bd9-480d-9393-db14108c977e-etcd-client\") pod \"apiserver-76f77b778f-p8t6g\" (UID: \"42497b95-3bd9-480d-9393-db14108c977e\") " pod="openshift-apiserver/apiserver-76f77b778f-p8t6g" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.637632 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42497b95-3bd9-480d-9393-db14108c977e-serving-cert\") pod \"apiserver-76f77b778f-p8t6g\" (UID: \"42497b95-3bd9-480d-9393-db14108c977e\") " pod="openshift-apiserver/apiserver-76f77b778f-p8t6g" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.637687 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjjmg\" (UniqueName: \"kubernetes.io/projected/e88b4cc4-37fd-43c3-aaad-ea153ece7b28-kube-api-access-jjjmg\") pod \"openshift-apiserver-operator-796bbdcf4f-kq2mg\" (UID: \"e88b4cc4-37fd-43c3-aaad-ea153ece7b28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kq2mg" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.637735 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d1a46958-489a-4357-adc6-ad2990dc19cd-profile-collector-cert\") pod \"olm-operator-6b444d44fb-4g6pl\" (UID: \"d1a46958-489a-4357-adc6-ad2990dc19cd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4g6pl" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.637821 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e477098-5f8a-4194-9125-806a2d8724ce-service-ca-bundle\") pod 
\"authentication-operator-69f744f599-pfnhc\" (UID: \"1e477098-5f8a-4194-9125-806a2d8724ce\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pfnhc" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.638988 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/46aacb90-0d57-4080-96db-5e477c100fe8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-s9xxp\" (UID: \"46aacb90-0d57-4080-96db-5e477c100fe8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-s9xxp" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.639543 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/eb0a83ee-f088-424f-b3c6-8ac8e2a50a27-etcd-service-ca\") pod \"etcd-operator-b45778765-2bfkn\" (UID: \"eb0a83ee-f088-424f-b3c6-8ac8e2a50a27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2bfkn" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.639697 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9a1f4665-bd0e-4e79-948e-1c1894945013-etcd-client\") pod \"apiserver-7bbb656c7d-zfnkd\" (UID: \"9a1f4665-bd0e-4e79-948e-1c1894945013\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zfnkd" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.640786 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3-installation-pull-secrets\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.642297 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.643281 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb0a83ee-f088-424f-b3c6-8ac8e2a50a27-serving-cert\") pod \"etcd-operator-b45778765-2bfkn\" (UID: \"eb0a83ee-f088-424f-b3c6-8ac8e2a50a27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2bfkn" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.645342 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a1f4665-bd0e-4e79-948e-1c1894945013-serving-cert\") pod \"apiserver-7bbb656c7d-zfnkd\" (UID: \"9a1f4665-bd0e-4e79-948e-1c1894945013\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zfnkd" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.662877 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.682935 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.702474 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.722747 4725 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"encryption-config-1" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.738516 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:05 crc kubenswrapper[4725]: E1014 13:17:05.738672 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:06.23864675 +0000 UTC m=+143.087081559 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.738795 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b372f112-486d-4848-a78f-552a485abacc-csi-data-dir\") pod \"csi-hostpathplugin-jnvzr\" (UID: \"b372f112-486d-4848-a78f-552a485abacc\") " pod="hostpath-provisioner/csi-hostpathplugin-jnvzr" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.738830 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.738854 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49slr\" (UniqueName: \"kubernetes.io/projected/f19f06ec-8bbf-4d8e-a6ef-f11e032454a9-kube-api-access-49slr\") pod \"service-ca-9c57cc56f-6pc7v\" (UID: \"f19f06ec-8bbf-4d8e-a6ef-f11e032454a9\") " pod="openshift-service-ca/service-ca-9c57cc56f-6pc7v" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.738879 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qthm8\" (UniqueName: \"kubernetes.io/projected/e1e76f4a-94bf-473d-9658-be90b7f79e56-kube-api-access-qthm8\") pod \"dns-operator-744455d44c-8b79b\" (UID: \"e1e76f4a-94bf-473d-9658-be90b7f79e56\") " pod="openshift-dns-operator/dns-operator-744455d44c-8b79b" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.738899 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d4f5074-61bb-4e5b-91ce-d8149ddb50f1-config-volume\") pod \"collect-profiles-29340795-6q4fq\" (UID: \"9d4f5074-61bb-4e5b-91ce-d8149ddb50f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340795-6q4fq" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.738921 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" 
(UniqueName: \"kubernetes.io/configmap/9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9-oauth-serving-cert\") pod \"console-f9d7485db-dbshk\" (UID: \"9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9\") " pod="openshift-console/console-f9d7485db-dbshk" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.738942 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn5nq\" (UniqueName: \"kubernetes.io/projected/10813d7e-3ed3-49a7-a2ad-5aa0db76a25d-kube-api-access-zn5nq\") pod \"control-plane-machine-set-operator-78cbb6b69f-658kg\" (UID: \"10813d7e-3ed3-49a7-a2ad-5aa0db76a25d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-658kg" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.738969 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7148d1c-3586-4dae-a72d-543940574d2e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9rnn7\" (UID: \"a7148d1c-3586-4dae-a72d-543940574d2e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9rnn7" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.738990 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/29bc7b45-9968-48c5-be01-2c8b7f39df13-metrics-tls\") pod \"ingress-operator-5b745b69d9-w98cd\" (UID: \"29bc7b45-9968-48c5-be01-2c8b7f39df13\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w98cd" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.739003 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b372f112-486d-4848-a78f-552a485abacc-csi-data-dir\") pod \"csi-hostpathplugin-jnvzr\" (UID: \"b372f112-486d-4848-a78f-552a485abacc\") " pod="hostpath-provisioner/csi-hostpathplugin-jnvzr" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.739216 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4357984b-72f2-4c52-bae4-1dce4616b0df-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-mspcl\" (UID: \"4357984b-72f2-4c52-bae4-1dce4616b0df\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mspcl" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.739272 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/218135fe-157d-49e2-b391-acf0af7fdc3e-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-htmcq\" (UID: \"218135fe-157d-49e2-b391-acf0af7fdc3e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-htmcq" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.739307 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5zp7\" (UniqueName: \"kubernetes.io/projected/d1a46958-489a-4357-adc6-ad2990dc19cd-kube-api-access-r5zp7\") pod \"olm-operator-6b444d44fb-4g6pl\" (UID: \"d1a46958-489a-4357-adc6-ad2990dc19cd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4g6pl" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.739334 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9-trusted-ca-bundle\") pod 
\"console-f9d7485db-dbshk\" (UID: \"9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9\") " pod="openshift-console/console-f9d7485db-dbshk" Oct 14 13:17:05 crc kubenswrapper[4725]: E1014 13:17:05.739350 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:06.239340519 +0000 UTC m=+143.087775538 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.739375 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b372f112-486d-4848-a78f-552a485abacc-socket-dir\") pod \"csi-hostpathplugin-jnvzr\" (UID: \"b372f112-486d-4848-a78f-552a485abacc\") " pod="hostpath-provisioner/csi-hostpathplugin-jnvzr" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.739402 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xp7lp\" (UniqueName: \"kubernetes.io/projected/3fd3f30a-c1ac-4a8f-8d95-9d4165d608a2-kube-api-access-xp7lp\") pod \"downloads-7954f5f757-4v6wb\" (UID: \"3fd3f30a-c1ac-4a8f-8d95-9d4165d608a2\") " pod="openshift-console/downloads-7954f5f757-4v6wb" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.739431 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/490a96f8-3a20-414a-b664-c2df9a8d373f-apiservice-cert\") pod \"packageserver-d55dfcdfc-ftgzg\" (UID: \"490a96f8-3a20-414a-b664-c2df9a8d373f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftgzg" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.739468 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ddb4cc6f-047d-4acb-ad5c-b216f5ceb8ce-certs\") pod \"machine-config-server-h2rt9\" (UID: \"ddb4cc6f-047d-4acb-ad5c-b216f5ceb8ce\") " pod="openshift-machine-config-operator/machine-config-server-h2rt9" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.739491 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/87d34324-1bfd-47d6-8551-7bc545575a4a-cert\") pod \"ingress-canary-zjwcj\" (UID: \"87d34324-1bfd-47d6-8551-7bc545575a4a\") " pod="openshift-ingress-canary/ingress-canary-zjwcj" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.739525 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx7jk\" (UniqueName: \"kubernetes.io/projected/b372f112-486d-4848-a78f-552a485abacc-kube-api-access-mx7jk\") pod \"csi-hostpathplugin-jnvzr\" (UID: \"b372f112-486d-4848-a78f-552a485abacc\") " pod="hostpath-provisioner/csi-hostpathplugin-jnvzr" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.739564 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7f2s6\" (UniqueName: 
\"kubernetes.io/projected/aba4ae84-aa5c-4790-a5cc-b2c865bae5dd-kube-api-access-7f2s6\") pod \"marketplace-operator-79b997595-hb586\" (UID: \"aba4ae84-aa5c-4790-a5cc-b2c865bae5dd\") " pod="openshift-marketplace/marketplace-operator-79b997595-hb586" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.739588 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/42497b95-3bd9-480d-9393-db14108c977e-image-import-ca\") pod \"apiserver-76f77b778f-p8t6g\" (UID: \"42497b95-3bd9-480d-9393-db14108c977e\") " pod="openshift-apiserver/apiserver-76f77b778f-p8t6g" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.739612 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/23ffd8bd-e6ca-4824-900e-08ba0cd80041-stats-auth\") pod \"router-default-5444994796-twd2b\" (UID: \"23ffd8bd-e6ca-4824-900e-08ba0cd80041\") " pod="openshift-ingress/router-default-5444994796-twd2b" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.739634 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxdlz\" (UniqueName: \"kubernetes.io/projected/9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9-kube-api-access-wxdlz\") pod \"console-f9d7485db-dbshk\" (UID: \"9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9\") " pod="openshift-console/console-f9d7485db-dbshk" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.739658 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t92z\" (UniqueName: \"kubernetes.io/projected/98ec98f8-f0b2-4170-bb1b-49bf82c82c76-kube-api-access-9t92z\") pod \"migrator-59844c95c7-dmz29\" (UID: \"98ec98f8-f0b2-4170-bb1b-49bf82c82c76\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dmz29" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.739684 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mztb9\" (UniqueName: \"kubernetes.io/projected/4432b354-68b6-461c-98f9-11651a4ec51a-kube-api-access-mztb9\") pod \"multus-admission-controller-857f4d67dd-s94lq\" (UID: \"4432b354-68b6-461c-98f9-11651a4ec51a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-s94lq" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.739711 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tpmn\" (UniqueName: \"kubernetes.io/projected/29bc7b45-9968-48c5-be01-2c8b7f39df13-kube-api-access-5tpmn\") pod \"ingress-operator-5b745b69d9-w98cd\" (UID: \"29bc7b45-9968-48c5-be01-2c8b7f39df13\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w98cd" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.739735 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44854d9f-c150-47e2-b099-66e7a7f483e2-config\") pod \"service-ca-operator-777779d784-6dvkx\" (UID: \"44854d9f-c150-47e2-b099-66e7a7f483e2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6dvkx" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.739765 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42ee8d72-7daa-4835-b976-db8e34dfdb3c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6brk2\" (UID: \"42ee8d72-7daa-4835-b976-db8e34dfdb3c\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6brk2" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.739791 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/23ffd8bd-e6ca-4824-900e-08ba0cd80041-default-certificate\") pod \"router-default-5444994796-twd2b\" (UID: \"23ffd8bd-e6ca-4824-900e-08ba0cd80041\") " pod="openshift-ingress/router-default-5444994796-twd2b" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.739828 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/aba4ae84-aa5c-4790-a5cc-b2c865bae5dd-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hb586\" (UID: \"aba4ae84-aa5c-4790-a5cc-b2c865bae5dd\") " pod="openshift-marketplace/marketplace-operator-79b997595-hb586" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.740042 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/29bc7b45-9968-48c5-be01-2c8b7f39df13-bound-sa-token\") pod \"ingress-operator-5b745b69d9-w98cd\" (UID: \"29bc7b45-9968-48c5-be01-2c8b7f39df13\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w98cd" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.740066 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42ee8d72-7daa-4835-b976-db8e34dfdb3c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6brk2\" (UID: \"42ee8d72-7daa-4835-b976-db8e34dfdb3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6brk2" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.740087 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kprnr\" (UniqueName: \"kubernetes.io/projected/42497b95-3bd9-480d-9393-db14108c977e-kube-api-access-kprnr\") pod \"apiserver-76f77b778f-p8t6g\" (UID: \"42497b95-3bd9-480d-9393-db14108c977e\") " pod="openshift-apiserver/apiserver-76f77b778f-p8t6g" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.740168 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7148d1c-3586-4dae-a72d-543940574d2e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9rnn7\" (UID: \"a7148d1c-3586-4dae-a72d-543940574d2e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9rnn7" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.740217 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/42497b95-3bd9-480d-9393-db14108c977e-encryption-config\") pod \"apiserver-76f77b778f-p8t6g\" (UID: \"42497b95-3bd9-480d-9393-db14108c977e\") " pod="openshift-apiserver/apiserver-76f77b778f-p8t6g" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.740251 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/77bd2504-9807-458c-80c3-a61e41bfcef2-metrics-tls\") pod \"dns-default-zbjgv\" (UID: \"77bd2504-9807-458c-80c3-a61e41bfcef2\") " pod="openshift-dns/dns-default-zbjgv" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.740272 4725 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/42497b95-3bd9-480d-9393-db14108c977e-node-pullsecrets\") pod \"apiserver-76f77b778f-p8t6g\" (UID: \"42497b95-3bd9-480d-9393-db14108c977e\") " pod="openshift-apiserver/apiserver-76f77b778f-p8t6g" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.740529 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9-oauth-serving-cert\") pod \"console-f9d7485db-dbshk\" (UID: \"9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9\") " pod="openshift-console/console-f9d7485db-dbshk" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.740583 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e88b4cc4-37fd-43c3-aaad-ea153ece7b28-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kq2mg\" (UID: \"e88b4cc4-37fd-43c3-aaad-ea153ece7b28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kq2mg" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.740630 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4357984b-72f2-4c52-bae4-1dce4616b0df-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-mspcl\" (UID: \"4357984b-72f2-4c52-bae4-1dce4616b0df\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mspcl" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.740652 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42497b95-3bd9-480d-9393-db14108c977e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-p8t6g\" (UID: \"42497b95-3bd9-480d-9393-db14108c977e\") " pod="openshift-apiserver/apiserver-76f77b778f-p8t6g" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.740673 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9-console-serving-cert\") pod \"console-f9d7485db-dbshk\" (UID: \"9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9\") " pod="openshift-console/console-f9d7485db-dbshk" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.740694 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9-console-oauth-config\") pod \"console-f9d7485db-dbshk\" (UID: \"9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9\") " pod="openshift-console/console-f9d7485db-dbshk" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.740714 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/218135fe-157d-49e2-b391-acf0af7fdc3e-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-htmcq\" (UID: \"218135fe-157d-49e2-b391-acf0af7fdc3e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-htmcq" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.740736 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e1e76f4a-94bf-473d-9658-be90b7f79e56-metrics-tls\") pod \"dns-operator-744455d44c-8b79b\" (UID: 
\"e1e76f4a-94bf-473d-9658-be90b7f79e56\") " pod="openshift-dns-operator/dns-operator-744455d44c-8b79b" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.740757 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xpnc\" (UniqueName: \"kubernetes.io/projected/87d34324-1bfd-47d6-8551-7bc545575a4a-kube-api-access-6xpnc\") pod \"ingress-canary-zjwcj\" (UID: \"87d34324-1bfd-47d6-8551-7bc545575a4a\") " pod="openshift-ingress-canary/ingress-canary-zjwcj" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.740780 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9-trusted-ca-bundle\") pod \"console-f9d7485db-dbshk\" (UID: \"9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9\") " pod="openshift-console/console-f9d7485db-dbshk" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.740778 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b372f112-486d-4848-a78f-552a485abacc-mountpoint-dir\") pod \"csi-hostpathplugin-jnvzr\" (UID: \"b372f112-486d-4848-a78f-552a485abacc\") " pod="hostpath-provisioner/csi-hostpathplugin-jnvzr" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.740851 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/46c9f20f-915c-48c9-b079-ed159fa09d70-proxy-tls\") pod \"machine-config-operator-74547568cd-f7scx\" (UID: \"46c9f20f-915c-48c9-b079-ed159fa09d70\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f7scx" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.740873 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77bd2504-9807-458c-80c3-a61e41bfcef2-config-volume\") pod \"dns-default-zbjgv\" (UID: \"77bd2504-9807-458c-80c3-a61e41bfcef2\") " pod="openshift-dns/dns-default-zbjgv" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.740892 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62g5x\" (UniqueName: \"kubernetes.io/projected/ddb4cc6f-047d-4acb-ad5c-b216f5ceb8ce-kube-api-access-62g5x\") pod \"machine-config-server-h2rt9\" (UID: \"ddb4cc6f-047d-4acb-ad5c-b216f5ceb8ce\") " pod="openshift-machine-config-operator/machine-config-server-h2rt9" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.740759 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b372f112-486d-4848-a78f-552a485abacc-socket-dir\") pod \"csi-hostpathplugin-jnvzr\" (UID: \"b372f112-486d-4848-a78f-552a485abacc\") " pod="hostpath-provisioner/csi-hostpathplugin-jnvzr" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.740910 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cngjq\" (UniqueName: \"kubernetes.io/projected/d54b2ace-6596-4bcf-88cb-23f381105f80-kube-api-access-cngjq\") pod \"catalog-operator-68c6474976-f6klx\" (UID: \"d54b2ace-6596-4bcf-88cb-23f381105f80\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f6klx" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.740937 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8hcg\" (UniqueName: 
\"kubernetes.io/projected/46c9f20f-915c-48c9-b079-ed159fa09d70-kube-api-access-w8hcg\") pod \"machine-config-operator-74547568cd-f7scx\" (UID: \"46c9f20f-915c-48c9-b079-ed159fa09d70\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f7scx" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.740965 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jskfz\" (UniqueName: \"kubernetes.io/projected/44854d9f-c150-47e2-b099-66e7a7f483e2-kube-api-access-jskfz\") pod \"service-ca-operator-777779d784-6dvkx\" (UID: \"44854d9f-c150-47e2-b099-66e7a7f483e2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6dvkx" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.740993 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/46c9f20f-915c-48c9-b079-ed159fa09d70-auth-proxy-config\") pod \"machine-config-operator-74547568cd-f7scx\" (UID: \"46c9f20f-915c-48c9-b079-ed159fa09d70\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f7scx" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.741019 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f19f06ec-8bbf-4d8e-a6ef-f11e032454a9-signing-cabundle\") pod \"service-ca-9c57cc56f-6pc7v\" (UID: \"f19f06ec-8bbf-4d8e-a6ef-f11e032454a9\") " pod="openshift-service-ca/service-ca-9c57cc56f-6pc7v" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.741042 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2gxm\" (UniqueName: \"kubernetes.io/projected/4ececce9-bde7-4d8c-b4b8-75b4a8538c04-kube-api-access-h2gxm\") pod \"machine-config-controller-84d6567774-f9xkt\" (UID: \"4ececce9-bde7-4d8c-b4b8-75b4a8538c04\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f9xkt" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.741063 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23ffd8bd-e6ca-4824-900e-08ba0cd80041-service-ca-bundle\") pod \"router-default-5444994796-twd2b\" (UID: \"23ffd8bd-e6ca-4824-900e-08ba0cd80041\") " pod="openshift-ingress/router-default-5444994796-twd2b" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.741080 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b372f112-486d-4848-a78f-552a485abacc-registration-dir\") pod \"csi-hostpathplugin-jnvzr\" (UID: \"b372f112-486d-4848-a78f-552a485abacc\") " pod="hostpath-provisioner/csi-hostpathplugin-jnvzr" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.741111 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfqsb\" (UniqueName: \"kubernetes.io/projected/9d4f5074-61bb-4e5b-91ce-d8149ddb50f1-kube-api-access-wfqsb\") pod \"collect-profiles-29340795-6q4fq\" (UID: \"9d4f5074-61bb-4e5b-91ce-d8149ddb50f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340795-6q4fq" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.741133 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/46c9f20f-915c-48c9-b079-ed159fa09d70-images\") pod \"machine-config-operator-74547568cd-f7scx\" (UID: 
\"46c9f20f-915c-48c9-b079-ed159fa09d70\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f7scx" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.741154 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4ececce9-bde7-4d8c-b4b8-75b4a8538c04-proxy-tls\") pod \"machine-config-controller-84d6567774-f9xkt\" (UID: \"4ececce9-bde7-4d8c-b4b8-75b4a8538c04\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f9xkt" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.741170 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7148d1c-3586-4dae-a72d-543940574d2e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9rnn7\" (UID: \"a7148d1c-3586-4dae-a72d-543940574d2e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9rnn7" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.741185 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d1a46958-489a-4357-adc6-ad2990dc19cd-srv-cert\") pod \"olm-operator-6b444d44fb-4g6pl\" (UID: \"d1a46958-489a-4357-adc6-ad2990dc19cd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4g6pl" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.741205 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b372f112-486d-4848-a78f-552a485abacc-plugins-dir\") pod \"csi-hostpathplugin-jnvzr\" (UID: \"b372f112-486d-4848-a78f-552a485abacc\") " pod="hostpath-provisioner/csi-hostpathplugin-jnvzr" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.741222 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29bc7b45-9968-48c5-be01-2c8b7f39df13-trusted-ca\") pod \"ingress-operator-5b745b69d9-w98cd\" (UID: \"29bc7b45-9968-48c5-be01-2c8b7f39df13\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w98cd" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.741239 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9-service-ca\") pod \"console-f9d7485db-dbshk\" (UID: \"9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9\") " pod="openshift-console/console-f9d7485db-dbshk" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.741289 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/490a96f8-3a20-414a-b664-c2df9a8d373f-tmpfs\") pod \"packageserver-d55dfcdfc-ftgzg\" (UID: \"490a96f8-3a20-414a-b664-c2df9a8d373f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftgzg" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.741307 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f19f06ec-8bbf-4d8e-a6ef-f11e032454a9-signing-key\") pod \"service-ca-9c57cc56f-6pc7v\" (UID: \"f19f06ec-8bbf-4d8e-a6ef-f11e032454a9\") " pod="openshift-service-ca/service-ca-9c57cc56f-6pc7v" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.741328 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvk2x\" (UniqueName: 
\"kubernetes.io/projected/77bd2504-9807-458c-80c3-a61e41bfcef2-kube-api-access-rvk2x\") pod \"dns-default-zbjgv\" (UID: \"77bd2504-9807-458c-80c3-a61e41bfcef2\") " pod="openshift-dns/dns-default-zbjgv" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.741380 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/64be0777-3e55-42b0-8832-8c58f1980f27-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-b9mz9\" (UID: \"64be0777-3e55-42b0-8832-8c58f1980f27\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b9mz9" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.741403 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d54b2ace-6596-4bcf-88cb-23f381105f80-profile-collector-cert\") pod \"catalog-operator-68c6474976-f6klx\" (UID: \"d54b2ace-6596-4bcf-88cb-23f381105f80\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f6klx" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.741474 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxpnb\" (UniqueName: \"kubernetes.io/projected/23ffd8bd-e6ca-4824-900e-08ba0cd80041-kube-api-access-xxpnb\") pod \"router-default-5444994796-twd2b\" (UID: \"23ffd8bd-e6ca-4824-900e-08ba0cd80041\") " pod="openshift-ingress/router-default-5444994796-twd2b" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.741497 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aba4ae84-aa5c-4790-a5cc-b2c865bae5dd-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hb586\" (UID: \"aba4ae84-aa5c-4790-a5cc-b2c865bae5dd\") " pod="openshift-marketplace/marketplace-operator-79b997595-hb586" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.741517 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/42497b95-3bd9-480d-9393-db14108c977e-audit-dir\") pod \"apiserver-76f77b778f-p8t6g\" (UID: \"42497b95-3bd9-480d-9393-db14108c977e\") " pod="openshift-apiserver/apiserver-76f77b778f-p8t6g" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.741552 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgqdh\" (UniqueName: \"kubernetes.io/projected/490a96f8-3a20-414a-b664-c2df9a8d373f-kube-api-access-tgqdh\") pod \"packageserver-d55dfcdfc-ftgzg\" (UID: \"490a96f8-3a20-414a-b664-c2df9a8d373f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftgzg" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.741569 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/218135fe-157d-49e2-b391-acf0af7fdc3e-config\") pod \"kube-apiserver-operator-766d6c64bb-htmcq\" (UID: \"218135fe-157d-49e2-b391-acf0af7fdc3e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-htmcq" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.741585 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9-console-config\") pod \"console-f9d7485db-dbshk\" (UID: \"9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9\") " 
pod="openshift-console/console-f9d7485db-dbshk" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.741601 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9d4f5074-61bb-4e5b-91ce-d8149ddb50f1-secret-volume\") pod \"collect-profiles-29340795-6q4fq\" (UID: \"9d4f5074-61bb-4e5b-91ce-d8149ddb50f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340795-6q4fq" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.741619 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/42497b95-3bd9-480d-9393-db14108c977e-node-pullsecrets\") pod \"apiserver-76f77b778f-p8t6g\" (UID: \"42497b95-3bd9-480d-9393-db14108c977e\") " pod="openshift-apiserver/apiserver-76f77b778f-p8t6g" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.741663 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e88b4cc4-37fd-43c3-aaad-ea153ece7b28-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kq2mg\" (UID: \"e88b4cc4-37fd-43c3-aaad-ea153ece7b28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kq2mg" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.741684 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4ececce9-bde7-4d8c-b4b8-75b4a8538c04-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-f9xkt\" (UID: \"4ececce9-bde7-4d8c-b4b8-75b4a8538c04\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f9xkt" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.741704 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42ee8d72-7daa-4835-b976-db8e34dfdb3c-config\") pod \"kube-controller-manager-operator-78b949d7b-6brk2\" (UID: \"42ee8d72-7daa-4835-b976-db8e34dfdb3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6brk2" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.741720 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42497b95-3bd9-480d-9393-db14108c977e-config\") pod \"apiserver-76f77b778f-p8t6g\" (UID: \"42497b95-3bd9-480d-9393-db14108c977e\") " pod="openshift-apiserver/apiserver-76f77b778f-p8t6g" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.741767 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-749qj\" (UniqueName: \"kubernetes.io/projected/4357984b-72f2-4c52-bae4-1dce4616b0df-kube-api-access-749qj\") pod \"kube-storage-version-migrator-operator-b67b599dd-mspcl\" (UID: \"4357984b-72f2-4c52-bae4-1dce4616b0df\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mspcl" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.741786 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/42497b95-3bd9-480d-9393-db14108c977e-etcd-client\") pod \"apiserver-76f77b778f-p8t6g\" (UID: \"42497b95-3bd9-480d-9393-db14108c977e\") " pod="openshift-apiserver/apiserver-76f77b778f-p8t6g" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.741815 4725 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42497b95-3bd9-480d-9393-db14108c977e-serving-cert\") pod \"apiserver-76f77b778f-p8t6g\" (UID: \"42497b95-3bd9-480d-9393-db14108c977e\") " pod="openshift-apiserver/apiserver-76f77b778f-p8t6g" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.741868 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjjmg\" (UniqueName: \"kubernetes.io/projected/e88b4cc4-37fd-43c3-aaad-ea153ece7b28-kube-api-access-jjjmg\") pod \"openshift-apiserver-operator-796bbdcf4f-kq2mg\" (UID: \"e88b4cc4-37fd-43c3-aaad-ea153ece7b28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kq2mg" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.742425 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d1a46958-489a-4357-adc6-ad2990dc19cd-profile-collector-cert\") pod \"olm-operator-6b444d44fb-4g6pl\" (UID: \"d1a46958-489a-4357-adc6-ad2990dc19cd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4g6pl" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.742465 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/10813d7e-3ed3-49a7-a2ad-5aa0db76a25d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-658kg\" (UID: \"10813d7e-3ed3-49a7-a2ad-5aa0db76a25d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-658kg" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.742491 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4432b354-68b6-461c-98f9-11651a4ec51a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-s94lq\" (UID: \"4432b354-68b6-461c-98f9-11651a4ec51a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-s94lq" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.742538 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/42497b95-3bd9-480d-9393-db14108c977e-audit\") pod \"apiserver-76f77b778f-p8t6g\" (UID: \"42497b95-3bd9-480d-9393-db14108c977e\") " pod="openshift-apiserver/apiserver-76f77b778f-p8t6g" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.742556 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/42497b95-3bd9-480d-9393-db14108c977e-etcd-serving-ca\") pod \"apiserver-76f77b778f-p8t6g\" (UID: \"42497b95-3bd9-480d-9393-db14108c977e\") " pod="openshift-apiserver/apiserver-76f77b778f-p8t6g" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.742577 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d54b2ace-6596-4bcf-88cb-23f381105f80-srv-cert\") pod \"catalog-operator-68c6474976-f6klx\" (UID: \"d54b2ace-6596-4bcf-88cb-23f381105f80\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f6klx" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.742600 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44854d9f-c150-47e2-b099-66e7a7f483e2-serving-cert\") pod 
\"service-ca-operator-777779d784-6dvkx\" (UID: \"44854d9f-c150-47e2-b099-66e7a7f483e2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6dvkx" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.742618 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/490a96f8-3a20-414a-b664-c2df9a8d373f-webhook-cert\") pod \"packageserver-d55dfcdfc-ftgzg\" (UID: \"490a96f8-3a20-414a-b664-c2df9a8d373f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftgzg" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.742637 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23ffd8bd-e6ca-4824-900e-08ba0cd80041-metrics-certs\") pod \"router-default-5444994796-twd2b\" (UID: \"23ffd8bd-e6ca-4824-900e-08ba0cd80041\") " pod="openshift-ingress/router-default-5444994796-twd2b" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.742701 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ddb4cc6f-047d-4acb-ad5c-b216f5ceb8ce-node-bootstrap-token\") pod \"machine-config-server-h2rt9\" (UID: \"ddb4cc6f-047d-4acb-ad5c-b216f5ceb8ce\") " pod="openshift-machine-config-operator/machine-config-server-h2rt9" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.742722 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsr6n\" (UniqueName: \"kubernetes.io/projected/64be0777-3e55-42b0-8832-8c58f1980f27-kube-api-access-gsr6n\") pod \"package-server-manager-789f6589d5-b9mz9\" (UID: \"64be0777-3e55-42b0-8832-8c58f1980f27\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b9mz9" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.742962 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b372f112-486d-4848-a78f-552a485abacc-registration-dir\") pod \"csi-hostpathplugin-jnvzr\" (UID: \"b372f112-486d-4848-a78f-552a485abacc\") " pod="hostpath-provisioner/csi-hostpathplugin-jnvzr" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.743019 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/42497b95-3bd9-480d-9393-db14108c977e-audit-dir\") pod \"apiserver-76f77b778f-p8t6g\" (UID: \"42497b95-3bd9-480d-9393-db14108c977e\") " pod="openshift-apiserver/apiserver-76f77b778f-p8t6g" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.743668 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4357984b-72f2-4c52-bae4-1dce4616b0df-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-mspcl\" (UID: \"4357984b-72f2-4c52-bae4-1dce4616b0df\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mspcl" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.743721 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23ffd8bd-e6ca-4824-900e-08ba0cd80041-service-ca-bundle\") pod \"router-default-5444994796-twd2b\" (UID: \"23ffd8bd-e6ca-4824-900e-08ba0cd80041\") " pod="openshift-ingress/router-default-5444994796-twd2b" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.744677 4725 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9-console-config\") pod \"console-f9d7485db-dbshk\" (UID: \"9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9\") " pod="openshift-console/console-f9d7485db-dbshk" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.744734 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9-service-ca\") pod \"console-f9d7485db-dbshk\" (UID: \"9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9\") " pod="openshift-console/console-f9d7485db-dbshk" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.744726 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7148d1c-3586-4dae-a72d-543940574d2e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9rnn7\" (UID: \"a7148d1c-3586-4dae-a72d-543940574d2e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9rnn7" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.744793 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b372f112-486d-4848-a78f-552a485abacc-plugins-dir\") pod \"csi-hostpathplugin-jnvzr\" (UID: \"b372f112-486d-4848-a78f-552a485abacc\") " pod="hostpath-provisioner/csi-hostpathplugin-jnvzr" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.744796 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/46c9f20f-915c-48c9-b079-ed159fa09d70-auth-proxy-config\") pod \"machine-config-operator-74547568cd-f7scx\" (UID: \"46c9f20f-915c-48c9-b079-ed159fa09d70\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f7scx" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.744968 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/218135fe-157d-49e2-b391-acf0af7fdc3e-config\") pod \"kube-apiserver-operator-766d6c64bb-htmcq\" (UID: \"218135fe-157d-49e2-b391-acf0af7fdc3e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-htmcq" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.745384 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b372f112-486d-4848-a78f-552a485abacc-mountpoint-dir\") pod \"csi-hostpathplugin-jnvzr\" (UID: \"b372f112-486d-4848-a78f-552a485abacc\") " pod="hostpath-provisioner/csi-hostpathplugin-jnvzr" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.745673 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.746051 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/490a96f8-3a20-414a-b664-c2df9a8d373f-tmpfs\") pod \"packageserver-d55dfcdfc-ftgzg\" (UID: \"490a96f8-3a20-414a-b664-c2df9a8d373f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftgzg" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.746379 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42ee8d72-7daa-4835-b976-db8e34dfdb3c-config\") pod 
\"kube-controller-manager-operator-78b949d7b-6brk2\" (UID: \"42ee8d72-7daa-4835-b976-db8e34dfdb3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6brk2" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.747650 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4ececce9-bde7-4d8c-b4b8-75b4a8538c04-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-f9xkt\" (UID: \"4ececce9-bde7-4d8c-b4b8-75b4a8538c04\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f9xkt" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.748155 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4357984b-72f2-4c52-bae4-1dce4616b0df-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-mspcl\" (UID: \"4357984b-72f2-4c52-bae4-1dce4616b0df\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mspcl" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.748234 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29bc7b45-9968-48c5-be01-2c8b7f39df13-trusted-ca\") pod \"ingress-operator-5b745b69d9-w98cd\" (UID: \"29bc7b45-9968-48c5-be01-2c8b7f39df13\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w98cd" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.748245 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42497b95-3bd9-480d-9393-db14108c977e-config\") pod \"apiserver-76f77b778f-p8t6g\" (UID: \"42497b95-3bd9-480d-9393-db14108c977e\") " pod="openshift-apiserver/apiserver-76f77b778f-p8t6g" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.748487 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/29bc7b45-9968-48c5-be01-2c8b7f39df13-metrics-tls\") pod \"ingress-operator-5b745b69d9-w98cd\" (UID: \"29bc7b45-9968-48c5-be01-2c8b7f39df13\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w98cd" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.748508 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e88b4cc4-37fd-43c3-aaad-ea153ece7b28-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kq2mg\" (UID: \"e88b4cc4-37fd-43c3-aaad-ea153ece7b28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kq2mg" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.749144 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/23ffd8bd-e6ca-4824-900e-08ba0cd80041-stats-auth\") pod \"router-default-5444994796-twd2b\" (UID: \"23ffd8bd-e6ca-4824-900e-08ba0cd80041\") " pod="openshift-ingress/router-default-5444994796-twd2b" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.749366 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/23ffd8bd-e6ca-4824-900e-08ba0cd80041-default-certificate\") pod \"router-default-5444994796-twd2b\" (UID: \"23ffd8bd-e6ca-4824-900e-08ba0cd80041\") " pod="openshift-ingress/router-default-5444994796-twd2b" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.749470 4725 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/42497b95-3bd9-480d-9393-db14108c977e-encryption-config\") pod \"apiserver-76f77b778f-p8t6g\" (UID: \"42497b95-3bd9-480d-9393-db14108c977e\") " pod="openshift-apiserver/apiserver-76f77b778f-p8t6g" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.749728 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e1e76f4a-94bf-473d-9658-be90b7f79e56-metrics-tls\") pod \"dns-operator-744455d44c-8b79b\" (UID: \"e1e76f4a-94bf-473d-9658-be90b7f79e56\") " pod="openshift-dns-operator/dns-operator-744455d44c-8b79b" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.750107 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4ececce9-bde7-4d8c-b4b8-75b4a8538c04-proxy-tls\") pod \"machine-config-controller-84d6567774-f9xkt\" (UID: \"4ececce9-bde7-4d8c-b4b8-75b4a8538c04\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f9xkt" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.751346 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9-console-serving-cert\") pod \"console-f9d7485db-dbshk\" (UID: \"9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9\") " pod="openshift-console/console-f9d7485db-dbshk" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.752015 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/10813d7e-3ed3-49a7-a2ad-5aa0db76a25d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-658kg\" (UID: \"10813d7e-3ed3-49a7-a2ad-5aa0db76a25d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-658kg" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.752517 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7148d1c-3586-4dae-a72d-543940574d2e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9rnn7\" (UID: \"a7148d1c-3586-4dae-a72d-543940574d2e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9rnn7" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.752912 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e88b4cc4-37fd-43c3-aaad-ea153ece7b28-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kq2mg\" (UID: \"e88b4cc4-37fd-43c3-aaad-ea153ece7b28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kq2mg" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.753085 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9-console-oauth-config\") pod \"console-f9d7485db-dbshk\" (UID: \"9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9\") " pod="openshift-console/console-f9d7485db-dbshk" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.759492 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/218135fe-157d-49e2-b391-acf0af7fdc3e-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-htmcq\" (UID: 
\"218135fe-157d-49e2-b391-acf0af7fdc3e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-htmcq" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.759958 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23ffd8bd-e6ca-4824-900e-08ba0cd80041-metrics-certs\") pod \"router-default-5444994796-twd2b\" (UID: \"23ffd8bd-e6ca-4824-900e-08ba0cd80041\") " pod="openshift-ingress/router-default-5444994796-twd2b" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.761922 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.763015 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42ee8d72-7daa-4835-b976-db8e34dfdb3c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6brk2\" (UID: \"42ee8d72-7daa-4835-b976-db8e34dfdb3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6brk2" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.765763 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.772864 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/42497b95-3bd9-480d-9393-db14108c977e-image-import-ca\") pod \"apiserver-76f77b778f-p8t6g\" (UID: \"42497b95-3bd9-480d-9393-db14108c977e\") " pod="openshift-apiserver/apiserver-76f77b778f-p8t6g" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.792341 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.793702 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42497b95-3bd9-480d-9393-db14108c977e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-p8t6g\" (UID: \"42497b95-3bd9-480d-9393-db14108c977e\") " pod="openshift-apiserver/apiserver-76f77b778f-p8t6g" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.802555 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.804328 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/42497b95-3bd9-480d-9393-db14108c977e-etcd-serving-ca\") pod \"apiserver-76f77b778f-p8t6g\" (UID: \"42497b95-3bd9-480d-9393-db14108c977e\") " pod="openshift-apiserver/apiserver-76f77b778f-p8t6g" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.823194 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.843071 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.843329 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:05 crc kubenswrapper[4725]: E1014 13:17:05.843653 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:06.343585618 +0000 UTC m=+143.192020427 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.843973 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:05 crc kubenswrapper[4725]: E1014 13:17:05.844947 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:06.344934476 +0000 UTC m=+143.193369445 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.850913 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/42497b95-3bd9-480d-9393-db14108c977e-etcd-client\") pod \"apiserver-76f77b778f-p8t6g\" (UID: \"42497b95-3bd9-480d-9393-db14108c977e\") " pod="openshift-apiserver/apiserver-76f77b778f-p8t6g" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.862676 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.882529 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.889706 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42497b95-3bd9-480d-9393-db14108c977e-serving-cert\") pod \"apiserver-76f77b778f-p8t6g\" (UID: \"42497b95-3bd9-480d-9393-db14108c977e\") " pod="openshift-apiserver/apiserver-76f77b778f-p8t6g" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.900577 4725 request.go:700] Waited for 1.014116782s due to client-side throttling, not priority and fairness, request: 
GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Daudit-1&limit=500&resourceVersion=0 Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.904127 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.907117 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/42497b95-3bd9-480d-9393-db14108c977e-audit\") pod \"apiserver-76f77b778f-p8t6g\" (UID: \"42497b95-3bd9-480d-9393-db14108c977e\") " pod="openshift-apiserver/apiserver-76f77b778f-p8t6g" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.935720 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.943270 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.945563 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:05 crc kubenswrapper[4725]: E1014 13:17:05.946021 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:06.445997306 +0000 UTC m=+143.294432115 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.946220 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:05 crc kubenswrapper[4725]: E1014 13:17:05.946725 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:06.446717946 +0000 UTC m=+143.295152755 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.962204 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.974972 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/aba4ae84-aa5c-4790-a5cc-b2c865bae5dd-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hb586\" (UID: \"aba4ae84-aa5c-4790-a5cc-b2c865bae5dd\") " pod="openshift-marketplace/marketplace-operator-79b997595-hb586" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.987226 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Oct 14 13:17:05 crc kubenswrapper[4725]: I1014 13:17:05.994158 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aba4ae84-aa5c-4790-a5cc-b2c865bae5dd-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hb586\" (UID: \"aba4ae84-aa5c-4790-a5cc-b2c865bae5dd\") " pod="openshift-marketplace/marketplace-operator-79b997595-hb586" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.003369 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.024351 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.041625 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.046859 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:06 crc kubenswrapper[4725]: E1014 13:17:06.047077 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:06.547050186 +0000 UTC m=+143.395484995 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.047272 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:06 crc kubenswrapper[4725]: E1014 13:17:06.048094 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:06.548066174 +0000 UTC m=+143.396500983 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.049435 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/46c9f20f-915c-48c9-b079-ed159fa09d70-proxy-tls\") pod \"machine-config-operator-74547568cd-f7scx\" (UID: \"46c9f20f-915c-48c9-b079-ed159fa09d70\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f7scx" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.062840 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.064709 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/46c9f20f-915c-48c9-b079-ed159fa09d70-images\") pod \"machine-config-operator-74547568cd-f7scx\" (UID: \"46c9f20f-915c-48c9-b079-ed159fa09d70\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f7scx" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.081753 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.102634 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.109227 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4432b354-68b6-461c-98f9-11651a4ec51a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-s94lq\" (UID: \"4432b354-68b6-461c-98f9-11651a4ec51a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-s94lq" Oct 14 13:17:06 crc 
kubenswrapper[4725]: I1014 13:17:06.122732 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.143492 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.148844 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:06 crc kubenswrapper[4725]: E1014 13:17:06.149087 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:06.649046452 +0000 UTC m=+143.497481271 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.149842 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:06 crc kubenswrapper[4725]: E1014 13:17:06.150303 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:06.650286117 +0000 UTC m=+143.498720936 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.162630 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.170022 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/490a96f8-3a20-414a-b664-c2df9a8d373f-webhook-cert\") pod \"packageserver-d55dfcdfc-ftgzg\" (UID: \"490a96f8-3a20-414a-b664-c2df9a8d373f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftgzg" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.176103 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/490a96f8-3a20-414a-b664-c2df9a8d373f-apiservice-cert\") pod \"packageserver-d55dfcdfc-ftgzg\" (UID: \"490a96f8-3a20-414a-b664-c2df9a8d373f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftgzg" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.181863 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.203250 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.207682 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d1a46958-489a-4357-adc6-ad2990dc19cd-profile-collector-cert\") pod \"olm-operator-6b444d44fb-4g6pl\" (UID: \"d1a46958-489a-4357-adc6-ad2990dc19cd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4g6pl" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.208621 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9d4f5074-61bb-4e5b-91ce-d8149ddb50f1-secret-volume\") pod \"collect-profiles-29340795-6q4fq\" (UID: \"9d4f5074-61bb-4e5b-91ce-d8149ddb50f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340795-6q4fq" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.216397 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d54b2ace-6596-4bcf-88cb-23f381105f80-profile-collector-cert\") pod \"catalog-operator-68c6474976-f6klx\" (UID: \"d54b2ace-6596-4bcf-88cb-23f381105f80\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f6klx" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.222007 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.228928 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d54b2ace-6596-4bcf-88cb-23f381105f80-srv-cert\") pod 
\"catalog-operator-68c6474976-f6klx\" (UID: \"d54b2ace-6596-4bcf-88cb-23f381105f80\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f6klx" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.243101 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.247032 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d1a46958-489a-4357-adc6-ad2990dc19cd-srv-cert\") pod \"olm-operator-6b444d44fb-4g6pl\" (UID: \"d1a46958-489a-4357-adc6-ad2990dc19cd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4g6pl" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.251181 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:06 crc kubenswrapper[4725]: E1014 13:17:06.251365 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:06.751343308 +0000 UTC m=+143.599778117 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.251572 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:06 crc kubenswrapper[4725]: E1014 13:17:06.251955 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:06.751945204 +0000 UTC m=+143.600380023 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.262292 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.266386 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/64be0777-3e55-42b0-8832-8c58f1980f27-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-b9mz9\" (UID: \"64be0777-3e55-42b0-8832-8c58f1980f27\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b9mz9" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.282120 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.302521 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.310437 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44854d9f-c150-47e2-b099-66e7a7f483e2-serving-cert\") pod \"service-ca-operator-777779d784-6dvkx\" (UID: \"44854d9f-c150-47e2-b099-66e7a7f483e2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6dvkx" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.322190 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.342248 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.351688 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44854d9f-c150-47e2-b099-66e7a7f483e2-config\") pod \"service-ca-operator-777779d784-6dvkx\" (UID: \"44854d9f-c150-47e2-b099-66e7a7f483e2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6dvkx" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.352792 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:06 crc kubenswrapper[4725]: E1014 13:17:06.352956 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:06.852934183 +0000 UTC m=+143.701368992 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.353152 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:06 crc kubenswrapper[4725]: E1014 13:17:06.353518 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:06.853509008 +0000 UTC m=+143.701943817 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.362062 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.382343 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.390213 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d4f5074-61bb-4e5b-91ce-d8149ddb50f1-config-volume\") pod \"collect-profiles-29340795-6q4fq\" (UID: \"9d4f5074-61bb-4e5b-91ce-d8149ddb50f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340795-6q4fq" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.402563 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.421997 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.430574 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f19f06ec-8bbf-4d8e-a6ef-f11e032454a9-signing-key\") pod \"service-ca-9c57cc56f-6pc7v\" (UID: \"f19f06ec-8bbf-4d8e-a6ef-f11e032454a9\") " pod="openshift-service-ca/service-ca-9c57cc56f-6pc7v" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.443978 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.445775 4725 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f19f06ec-8bbf-4d8e-a6ef-f11e032454a9-signing-cabundle\") pod \"service-ca-9c57cc56f-6pc7v\" (UID: \"f19f06ec-8bbf-4d8e-a6ef-f11e032454a9\") " pod="openshift-service-ca/service-ca-9c57cc56f-6pc7v" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.454000 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:06 crc kubenswrapper[4725]: E1014 13:17:06.454166 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:06.954144327 +0000 UTC m=+143.802579136 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.454423 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:06 crc kubenswrapper[4725]: E1014 13:17:06.455040 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:06.955029972 +0000 UTC m=+143.803464781 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.462171 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.482597 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.502535 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.522708 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.543285 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.547583 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77bd2504-9807-458c-80c3-a61e41bfcef2-config-volume\") pod \"dns-default-zbjgv\" (UID: \"77bd2504-9807-458c-80c3-a61e41bfcef2\") " pod="openshift-dns/dns-default-zbjgv" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.555915 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:06 crc kubenswrapper[4725]: E1014 13:17:06.556077 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:07.056054392 +0000 UTC m=+143.904489201 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.556533 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:06 crc kubenswrapper[4725]: E1014 13:17:06.557286 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:07.057251695 +0000 UTC m=+143.905686544 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.573683 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.581810 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.586691 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/77bd2504-9807-458c-80c3-a61e41bfcef2-metrics-tls\") pod \"dns-default-zbjgv\" (UID: \"77bd2504-9807-458c-80c3-a61e41bfcef2\") " pod="openshift-dns/dns-default-zbjgv" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.602746 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.622584 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.634380 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/87d34324-1bfd-47d6-8551-7bc545575a4a-cert\") pod \"ingress-canary-zjwcj\" (UID: \"87d34324-1bfd-47d6-8551-7bc545575a4a\") " pod="openshift-ingress-canary/ingress-canary-zjwcj" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.641673 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.657937 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:06 crc kubenswrapper[4725]: E1014 13:17:06.658107 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:07.158082789 +0000 UTC m=+144.006517598 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.658639 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:06 crc kubenswrapper[4725]: E1014 13:17:06.659200 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:07.159192879 +0000 UTC m=+144.007627688 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.669281 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.682827 4725 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.701929 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.725939 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Oct 14 13:17:06 crc kubenswrapper[4725]: E1014 13:17:06.746151 4725 secret.go:188] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Oct 14 13:17:06 crc kubenswrapper[4725]: E1014 13:17:06.746288 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ddb4cc6f-047d-4acb-ad5c-b216f5ceb8ce-node-bootstrap-token podName:ddb4cc6f-047d-4acb-ad5c-b216f5ceb8ce nodeName:}" failed. No retries permitted until 2025-10-14 13:17:07.246264961 +0000 UTC m=+144.094699770 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/ddb4cc6f-047d-4acb-ad5c-b216f5ceb8ce-node-bootstrap-token") pod "machine-config-server-h2rt9" (UID: "ddb4cc6f-047d-4acb-ad5c-b216f5ceb8ce") : failed to sync secret cache: timed out waiting for the condition Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.751549 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.755906 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ddb4cc6f-047d-4acb-ad5c-b216f5ceb8ce-certs\") pod \"machine-config-server-h2rt9\" (UID: \"ddb4cc6f-047d-4acb-ad5c-b216f5ceb8ce\") " pod="openshift-machine-config-operator/machine-config-server-h2rt9" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.760423 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:06 crc kubenswrapper[4725]: E1014 13:17:06.760589 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:07.260554199 +0000 UTC m=+144.108989038 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.760944 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:06 crc kubenswrapper[4725]: E1014 13:17:06.761376 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:07.261361061 +0000 UTC m=+144.109795900 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.762936 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.797299 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26kbc\" (UniqueName: \"kubernetes.io/projected/b3fd5e11-b817-4e76-a744-09eefc35c83b-kube-api-access-26kbc\") pod \"openshift-config-operator-7777fb866f-k8lml\" (UID: \"b3fd5e11-b817-4e76-a744-09eefc35c83b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-k8lml" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.825930 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kp5tx\" (UniqueName: \"kubernetes.io/projected/77a019ee-f99c-4962-9032-6493f34adee4-kube-api-access-kp5tx\") pod \"openshift-controller-manager-operator-756b6f6bc6-k4sc6\" (UID: \"77a019ee-f99c-4962-9032-6493f34adee4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k4sc6" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.840075 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4wsk\" (UniqueName: \"kubernetes.io/projected/f3ea0115-bde6-42c0-b55f-2bd6d9b68d35-kube-api-access-c4wsk\") pod \"machine-approver-56656f9798-tddw7\" (UID: \"f3ea0115-bde6-42c0-b55f-2bd6d9b68d35\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tddw7" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.862814 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:06 crc kubenswrapper[4725]: E1014 13:17:06.863499 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:07.363440609 +0000 UTC m=+144.211875428 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.878877 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69z7n\" (UniqueName: \"kubernetes.io/projected/960ecad3-135d-4478-bd6f-b37588dd49bb-kube-api-access-69z7n\") pod \"oauth-openshift-558db77b4-vq7zq\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.899524 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lv4m\" (UniqueName: \"kubernetes.io/projected/ded0ec42-2430-4e32-909c-308aeef7c49a-kube-api-access-8lv4m\") pod \"route-controller-manager-6576b87f9c-462vw\" (UID: \"ded0ec42-2430-4e32-909c-308aeef7c49a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-462vw" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.900822 4725 request.go:700] Waited for 1.882215154s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/serviceaccounts/cluster-samples-operator/token Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.919484 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dt2x\" (UniqueName: \"kubernetes.io/projected/604c84ad-7483-438d-9972-a031afc477f6-kube-api-access-4dt2x\") pod \"cluster-samples-operator-665b6dd947-4pqps\" (UID: \"604c84ad-7483-438d-9972-a031afc477f6\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4pqps" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.934327 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tddw7" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.941818 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hc6hp\" (UniqueName: \"kubernetes.io/projected/b8ca672d-5997-4fe0-b717-64e0b07ffbbe-kube-api-access-hc6hp\") pod \"console-operator-58897d9998-dfhsw\" (UID: \"b8ca672d-5997-4fe0-b717-64e0b07ffbbe\") " pod="openshift-console-operator/console-operator-58897d9998-dfhsw" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.949171 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66pz6\" (UniqueName: \"kubernetes.io/projected/4bb4a9f2-74c0-401e-b880-bd17f95b00d2-kube-api-access-66pz6\") pod \"controller-manager-879f6c89f-96fdf\" (UID: \"4bb4a9f2-74c0-401e-b880-bd17f95b00d2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-96fdf" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.955467 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-dfhsw" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.965927 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:06 crc kubenswrapper[4725]: E1014 13:17:06.966435 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:07.466415543 +0000 UTC m=+144.314850362 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.969548 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hctgk\" (UniqueName: \"kubernetes.io/projected/1abee1b1-8c1e-43df-89cc-5381a2ef0fc6-kube-api-access-hctgk\") pod \"machine-api-operator-5694c8668f-r985j\" (UID: \"1abee1b1-8c1e-43df-89cc-5381a2ef0fc6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-r985j" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.986018 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4pqps" Oct 14 13:17:06 crc kubenswrapper[4725]: I1014 13:17:06.999370 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/46aacb90-0d57-4080-96db-5e477c100fe8-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-s9xxp\" (UID: \"46aacb90-0d57-4080-96db-5e477c100fe8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-s9xxp" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.006019 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k4sc6" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.019005 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thnr2\" (UniqueName: \"kubernetes.io/projected/eb0a83ee-f088-424f-b3c6-8ac8e2a50a27-kube-api-access-thnr2\") pod \"etcd-operator-b45778765-2bfkn\" (UID: \"eb0a83ee-f088-424f-b3c6-8ac8e2a50a27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2bfkn" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.036877 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-k8lml" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.042185 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdn7m\" (UniqueName: \"kubernetes.io/projected/1e477098-5f8a-4194-9125-806a2d8724ce-kube-api-access-bdn7m\") pod \"authentication-operator-69f744f599-pfnhc\" (UID: \"1e477098-5f8a-4194-9125-806a2d8724ce\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pfnhc" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.043649 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-462vw" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.060060 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm47q\" (UniqueName: \"kubernetes.io/projected/c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3-kube-api-access-jm47q\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.071529 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:07 crc kubenswrapper[4725]: E1014 13:17:07.071981 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:07.571942908 +0000 UTC m=+144.420377717 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.082713 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:07 crc kubenswrapper[4725]: E1014 13:17:07.083585 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:07.583565131 +0000 UTC m=+144.431999940 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.094943 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw8l8\" (UniqueName: \"kubernetes.io/projected/46aacb90-0d57-4080-96db-5e477c100fe8-kube-api-access-qw8l8\") pod \"cluster-image-registry-operator-dc59b4c8b-s9xxp\" (UID: \"46aacb90-0d57-4080-96db-5e477c100fe8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-s9xxp" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.101827 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3-bound-sa-token\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.124974 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-96fdf" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.125351 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg7w2\" (UniqueName: \"kubernetes.io/projected/9a1f4665-bd0e-4e79-948e-1c1894945013-kube-api-access-gg7w2\") pod \"apiserver-7bbb656c7d-zfnkd\" (UID: \"9a1f4665-bd0e-4e79-948e-1c1894945013\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zfnkd" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.140788 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qthm8\" (UniqueName: \"kubernetes.io/projected/e1e76f4a-94bf-473d-9658-be90b7f79e56-kube-api-access-qthm8\") pod \"dns-operator-744455d44c-8b79b\" (UID: \"e1e76f4a-94bf-473d-9658-be90b7f79e56\") " pod="openshift-dns-operator/dns-operator-744455d44c-8b79b" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.141093 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-r985j" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.161969 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.170019 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7148d1c-3586-4dae-a72d-543940574d2e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9rnn7\" (UID: \"a7148d1c-3586-4dae-a72d-543940574d2e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9rnn7" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.183361 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-pfnhc" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.186604 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:07 crc kubenswrapper[4725]: E1014 13:17:07.187237 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:07.687197813 +0000 UTC m=+144.535632622 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.188710 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9rnn7" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.197338 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn5nq\" (UniqueName: \"kubernetes.io/projected/10813d7e-3ed3-49a7-a2ad-5aa0db76a25d-kube-api-access-zn5nq\") pod \"control-plane-machine-set-operator-78cbb6b69f-658kg\" (UID: \"10813d7e-3ed3-49a7-a2ad-5aa0db76a25d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-658kg" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.217581 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zfnkd" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.218289 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49slr\" (UniqueName: \"kubernetes.io/projected/f19f06ec-8bbf-4d8e-a6ef-f11e032454a9-kube-api-access-49slr\") pod \"service-ca-9c57cc56f-6pc7v\" (UID: \"f19f06ec-8bbf-4d8e-a6ef-f11e032454a9\") " pod="openshift-service-ca/service-ca-9c57cc56f-6pc7v" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.231417 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5zp7\" (UniqueName: \"kubernetes.io/projected/d1a46958-489a-4357-adc6-ad2990dc19cd-kube-api-access-r5zp7\") pod \"olm-operator-6b444d44fb-4g6pl\" (UID: \"d1a46958-489a-4357-adc6-ad2990dc19cd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4g6pl" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.246444 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xp7lp\" (UniqueName: \"kubernetes.io/projected/3fd3f30a-c1ac-4a8f-8d95-9d4165d608a2-kube-api-access-xp7lp\") pod \"downloads-7954f5f757-4v6wb\" (UID: \"3fd3f30a-c1ac-4a8f-8d95-9d4165d608a2\") " pod="openshift-console/downloads-7954f5f757-4v6wb" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.265108 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4g6pl" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.265144 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mztb9\" (UniqueName: \"kubernetes.io/projected/4432b354-68b6-461c-98f9-11651a4ec51a-kube-api-access-mztb9\") pod \"multus-admission-controller-857f4d67dd-s94lq\" (UID: \"4432b354-68b6-461c-98f9-11651a4ec51a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-s94lq" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.288744 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ddb4cc6f-047d-4acb-ad5c-b216f5ceb8ce-node-bootstrap-token\") pod \"machine-config-server-h2rt9\" (UID: \"ddb4cc6f-047d-4acb-ad5c-b216f5ceb8ce\") " pod="openshift-machine-config-operator/machine-config-server-h2rt9" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.288840 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:07 crc kubenswrapper[4725]: E1014 13:17:07.289974 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:07.789951251 +0000 UTC m=+144.638386060 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.298379 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxdlz\" (UniqueName: \"kubernetes.io/projected/9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9-kube-api-access-wxdlz\") pod \"console-f9d7485db-dbshk\" (UID: \"9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9\") " pod="openshift-console/console-f9d7485db-dbshk" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.299260 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t92z\" (UniqueName: \"kubernetes.io/projected/98ec98f8-f0b2-4170-bb1b-49bf82c82c76-kube-api-access-9t92z\") pod \"migrator-59844c95c7-dmz29\" (UID: \"98ec98f8-f0b2-4170-bb1b-49bf82c82c76\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dmz29" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.303433 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ddb4cc6f-047d-4acb-ad5c-b216f5ceb8ce-node-bootstrap-token\") pod \"machine-config-server-h2rt9\" (UID: \"ddb4cc6f-047d-4acb-ad5c-b216f5ceb8ce\") " pod="openshift-machine-config-operator/machine-config-server-h2rt9" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.303811 4725 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-6pc7v" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.316237 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-2bfkn" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.331558 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tpmn\" (UniqueName: \"kubernetes.io/projected/29bc7b45-9968-48c5-be01-2c8b7f39df13-kube-api-access-5tpmn\") pod \"ingress-operator-5b745b69d9-w98cd\" (UID: \"29bc7b45-9968-48c5-be01-2c8b7f39df13\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w98cd" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.338437 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7f2s6\" (UniqueName: \"kubernetes.io/projected/aba4ae84-aa5c-4790-a5cc-b2c865bae5dd-kube-api-access-7f2s6\") pod \"marketplace-operator-79b997595-hb586\" (UID: \"aba4ae84-aa5c-4790-a5cc-b2c865bae5dd\") " pod="openshift-marketplace/marketplace-operator-79b997595-hb586" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.355179 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-s9xxp" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.361928 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42ee8d72-7daa-4835-b976-db8e34dfdb3c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6brk2\" (UID: \"42ee8d72-7daa-4835-b976-db8e34dfdb3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6brk2" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.366018 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-dbshk" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.376673 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6brk2" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.380576 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-8b79b" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.386859 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-658kg" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.390262 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:07 crc kubenswrapper[4725]: E1014 13:17:07.390433 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:07.890401204 +0000 UTC m=+144.738836013 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.390714 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:07 crc kubenswrapper[4725]: E1014 13:17:07.391331 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:07.89130819 +0000 UTC m=+144.739742999 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.404984 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dmz29" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.416700 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/218135fe-157d-49e2-b391-acf0af7fdc3e-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-htmcq\" (UID: \"218135fe-157d-49e2-b391-acf0af7fdc3e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-htmcq" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.416849 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx7jk\" (UniqueName: \"kubernetes.io/projected/b372f112-486d-4848-a78f-552a485abacc-kube-api-access-mx7jk\") pod \"csi-hostpathplugin-jnvzr\" (UID: \"b372f112-486d-4848-a78f-552a485abacc\") " pod="hostpath-provisioner/csi-hostpathplugin-jnvzr" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.429441 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/29bc7b45-9968-48c5-be01-2c8b7f39df13-bound-sa-token\") pod \"ingress-operator-5b745b69d9-w98cd\" (UID: \"29bc7b45-9968-48c5-be01-2c8b7f39df13\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w98cd" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.444059 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kprnr\" (UniqueName: \"kubernetes.io/projected/42497b95-3bd9-480d-9393-db14108c977e-kube-api-access-kprnr\") pod \"apiserver-76f77b778f-p8t6g\" (UID: \"42497b95-3bd9-480d-9393-db14108c977e\") " pod="openshift-apiserver/apiserver-76f77b778f-p8t6g" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.471589 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsr6n\" (UniqueName: \"kubernetes.io/projected/64be0777-3e55-42b0-8832-8c58f1980f27-kube-api-access-gsr6n\") pod \"package-server-manager-789f6589d5-b9mz9\" (UID: \"64be0777-3e55-42b0-8832-8c58f1980f27\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b9mz9" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.472132 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-4v6wb" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.480682 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-htmcq" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.487978 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxpnb\" (UniqueName: \"kubernetes.io/projected/23ffd8bd-e6ca-4824-900e-08ba0cd80041-kube-api-access-xxpnb\") pod \"router-default-5444994796-twd2b\" (UID: \"23ffd8bd-e6ca-4824-900e-08ba0cd80041\") " pod="openshift-ingress/router-default-5444994796-twd2b" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.492159 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:07 crc kubenswrapper[4725]: E1014 13:17:07.492700 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:07.992681379 +0000 UTC m=+144.841116188 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.506070 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgqdh\" (UniqueName: \"kubernetes.io/projected/490a96f8-3a20-414a-b664-c2df9a8d373f-kube-api-access-tgqdh\") pod \"packageserver-d55dfcdfc-ftgzg\" (UID: \"490a96f8-3a20-414a-b664-c2df9a8d373f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftgzg" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.507872 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-p8t6g" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.514395 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hb586" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.527935 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfqsb\" (UniqueName: \"kubernetes.io/projected/9d4f5074-61bb-4e5b-91ce-d8149ddb50f1-kube-api-access-wfqsb\") pod \"collect-profiles-29340795-6q4fq\" (UID: \"9d4f5074-61bb-4e5b-91ce-d8149ddb50f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340795-6q4fq" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.531576 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-s94lq" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.541164 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftgzg" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.547039 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62g5x\" (UniqueName: \"kubernetes.io/projected/ddb4cc6f-047d-4acb-ad5c-b216f5ceb8ce-kube-api-access-62g5x\") pod \"machine-config-server-h2rt9\" (UID: \"ddb4cc6f-047d-4acb-ad5c-b216f5ceb8ce\") " pod="openshift-machine-config-operator/machine-config-server-h2rt9" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.573408 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b9mz9" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.579701 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjjmg\" (UniqueName: \"kubernetes.io/projected/e88b4cc4-37fd-43c3-aaad-ea153ece7b28-kube-api-access-jjjmg\") pod \"openshift-apiserver-operator-796bbdcf4f-kq2mg\" (UID: \"e88b4cc4-37fd-43c3-aaad-ea153ece7b28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kq2mg" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.587258 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cngjq\" (UniqueName: \"kubernetes.io/projected/d54b2ace-6596-4bcf-88cb-23f381105f80-kube-api-access-cngjq\") pod \"catalog-operator-68c6474976-f6klx\" (UID: \"d54b2ace-6596-4bcf-88cb-23f381105f80\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f6klx" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.593514 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340795-6q4fq" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.594245 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:07 crc kubenswrapper[4725]: E1014 13:17:07.595081 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:08.095058455 +0000 UTC m=+144.943493264 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.599572 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8hcg\" (UniqueName: \"kubernetes.io/projected/46c9f20f-915c-48c9-b079-ed159fa09d70-kube-api-access-w8hcg\") pod \"machine-config-operator-74547568cd-f7scx\" (UID: \"46c9f20f-915c-48c9-b079-ed159fa09d70\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f7scx" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.620114 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jskfz\" (UniqueName: \"kubernetes.io/projected/44854d9f-c150-47e2-b099-66e7a7f483e2-kube-api-access-jskfz\") pod \"service-ca-operator-777779d784-6dvkx\" (UID: \"44854d9f-c150-47e2-b099-66e7a7f483e2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6dvkx" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.648079 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-jnvzr" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.658119 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2gxm\" (UniqueName: \"kubernetes.io/projected/4ececce9-bde7-4d8c-b4b8-75b4a8538c04-kube-api-access-h2gxm\") pod \"machine-config-controller-84d6567774-f9xkt\" (UID: \"4ececce9-bde7-4d8c-b4b8-75b4a8538c04\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f9xkt" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.664309 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-h2rt9" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.664815 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-749qj\" (UniqueName: \"kubernetes.io/projected/4357984b-72f2-4c52-bae4-1dce4616b0df-kube-api-access-749qj\") pod \"kube-storage-version-migrator-operator-b67b599dd-mspcl\" (UID: \"4357984b-72f2-4c52-bae4-1dce4616b0df\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mspcl" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.688876 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xpnc\" (UniqueName: \"kubernetes.io/projected/87d34324-1bfd-47d6-8551-7bc545575a4a-kube-api-access-6xpnc\") pod \"ingress-canary-zjwcj\" (UID: \"87d34324-1bfd-47d6-8551-7bc545575a4a\") " pod="openshift-ingress-canary/ingress-canary-zjwcj" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.691176 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tddw7" event={"ID":"f3ea0115-bde6-42c0-b55f-2bd6d9b68d35","Type":"ContainerStarted","Data":"d96eeac63ee68b638eebac0a57e873d8c32b4823e60a88dfbb398b75f501a49a"} Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.696057 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:07 crc kubenswrapper[4725]: E1014 13:17:07.696551 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:08.196533098 +0000 UTC m=+145.044967897 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.696649 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kq2mg" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.713093 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-462vw"] Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.713478 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mspcl" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.718849 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k4sc6"] Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.719818 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4pqps"] Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.722041 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w98cd" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.734653 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-k8lml"] Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.742702 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvk2x\" (UniqueName: \"kubernetes.io/projected/77bd2504-9807-458c-80c3-a61e41bfcef2-kube-api-access-rvk2x\") pod \"dns-default-zbjgv\" (UID: \"77bd2504-9807-458c-80c3-a61e41bfcef2\") " pod="openshift-dns/dns-default-zbjgv" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.761513 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-twd2b" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.768898 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-dfhsw"] Oct 14 13:17:07 crc kubenswrapper[4725]: W1014 13:17:07.790333 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23ffd8bd_e6ca_4824_900e_08ba0cd80041.slice/crio-d4ff5e8524aa3ba9be9340382eee96cc83c472644dc08a65be2847c104913c21 WatchSource:0}: Error finding container d4ff5e8524aa3ba9be9340382eee96cc83c472644dc08a65be2847c104913c21: Status 404 returned error can't find the container with id d4ff5e8524aa3ba9be9340382eee96cc83c472644dc08a65be2847c104913c21 Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.797256 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f9xkt" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.798129 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:07 crc kubenswrapper[4725]: E1014 13:17:07.798598 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:08.298572295 +0000 UTC m=+145.147007284 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.838689 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f7scx" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.849978 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f6klx" Oct 14 13:17:07 crc kubenswrapper[4725]: W1014 13:17:07.870916 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77a019ee_f99c_4962_9032_6493f34adee4.slice/crio-b141c2e536b526ed5fb3489b43474eae9395a322a1ee588d380398d5fbda20af WatchSource:0}: Error finding container b141c2e536b526ed5fb3489b43474eae9395a322a1ee588d380398d5fbda20af: Status 404 returned error can't find the container with id b141c2e536b526ed5fb3489b43474eae9395a322a1ee588d380398d5fbda20af Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.881521 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6dvkx" Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.900268 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:07 crc kubenswrapper[4725]: E1014 13:17:07.900742 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:08.400722907 +0000 UTC m=+145.249157716 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.914680 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-zbjgv" Oct 14 13:17:07 crc kubenswrapper[4725]: W1014 13:17:07.938467 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3fd5e11_b817_4e76_a744_09eefc35c83b.slice/crio-89f5b51dd13dc495d4613e7d7b70cf0c082e83f5c54179c300f6e08e5bc488a7 WatchSource:0}: Error finding container 89f5b51dd13dc495d4613e7d7b70cf0c082e83f5c54179c300f6e08e5bc488a7: Status 404 returned error can't find the container with id 89f5b51dd13dc495d4613e7d7b70cf0c082e83f5c54179c300f6e08e5bc488a7 Oct 14 13:17:07 crc kubenswrapper[4725]: W1014 13:17:07.954765 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8ca672d_5997_4fe0_b717_64e0b07ffbbe.slice/crio-b81488f0f9f7c74c9888745d8607dc0398bede9cd9bb69c6178709cdafe6f760 WatchSource:0}: Error finding container b81488f0f9f7c74c9888745d8607dc0398bede9cd9bb69c6178709cdafe6f760: Status 404 returned error can't find the container with id b81488f0f9f7c74c9888745d8607dc0398bede9cd9bb69c6178709cdafe6f760 Oct 14 13:17:07 crc kubenswrapper[4725]: I1014 13:17:07.954922 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-zjwcj" Oct 14 13:17:08 crc kubenswrapper[4725]: I1014 13:17:08.006509 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:08 crc kubenswrapper[4725]: E1014 13:17:08.007401 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:08.507380043 +0000 UTC m=+145.355814852 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:08 crc kubenswrapper[4725]: I1014 13:17:08.089937 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-96fdf"] Oct 14 13:17:08 crc kubenswrapper[4725]: I1014 13:17:08.093316 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-r985j"] Oct 14 13:17:08 crc kubenswrapper[4725]: I1014 13:17:08.109375 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:08 crc kubenswrapper[4725]: E1014 13:17:08.109753 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:08.609712928 +0000 UTC m=+145.458147737 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:08 crc kubenswrapper[4725]: I1014 13:17:08.109926 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:08 crc kubenswrapper[4725]: E1014 13:17:08.110408 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:08.610397507 +0000 UTC m=+145.458832316 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:08 crc kubenswrapper[4725]: I1014 13:17:08.213134 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:08 crc kubenswrapper[4725]: E1014 13:17:08.213596 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:08.713577516 +0000 UTC m=+145.562012325 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:08 crc kubenswrapper[4725]: I1014 13:17:08.314496 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:08 crc kubenswrapper[4725]: E1014 13:17:08.314851 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:08.814836842 +0000 UTC m=+145.663271651 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:08 crc kubenswrapper[4725]: I1014 13:17:08.322381 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vq7zq"] Oct 14 13:17:08 crc kubenswrapper[4725]: I1014 13:17:08.416194 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:08 crc kubenswrapper[4725]: E1014 13:17:08.416415 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:08.916374187 +0000 UTC m=+145.764808996 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:08 crc kubenswrapper[4725]: I1014 13:17:08.416965 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:08 crc kubenswrapper[4725]: E1014 13:17:08.417312 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:08.917297792 +0000 UTC m=+145.765732601 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:08 crc kubenswrapper[4725]: I1014 13:17:08.478277 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pfnhc"] Oct 14 13:17:08 crc kubenswrapper[4725]: I1014 13:17:08.519625 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:08 crc kubenswrapper[4725]: E1014 13:17:08.520149 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:09.020120702 +0000 UTC m=+145.868555511 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:08 crc kubenswrapper[4725]: I1014 13:17:08.621377 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:08 crc kubenswrapper[4725]: E1014 13:17:08.621919 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:09.121896382 +0000 UTC m=+145.970331191 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:08 crc kubenswrapper[4725]: I1014 13:17:08.700234 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-twd2b" event={"ID":"23ffd8bd-e6ca-4824-900e-08ba0cd80041","Type":"ContainerStarted","Data":"f705c000140e2829feb50f8e96d6936ac85a5c71b609a69d73929175e4321983"} Oct 14 13:17:08 crc kubenswrapper[4725]: I1014 13:17:08.700291 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-twd2b" event={"ID":"23ffd8bd-e6ca-4824-900e-08ba0cd80041","Type":"ContainerStarted","Data":"d4ff5e8524aa3ba9be9340382eee96cc83c472644dc08a65be2847c104913c21"} Oct 14 13:17:08 crc kubenswrapper[4725]: I1014 13:17:08.712737 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-dfhsw" event={"ID":"b8ca672d-5997-4fe0-b717-64e0b07ffbbe","Type":"ContainerStarted","Data":"b81488f0f9f7c74c9888745d8607dc0398bede9cd9bb69c6178709cdafe6f760"} Oct 14 13:17:08 crc kubenswrapper[4725]: I1014 13:17:08.717875 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k4sc6" event={"ID":"77a019ee-f99c-4962-9032-6493f34adee4","Type":"ContainerStarted","Data":"b141c2e536b526ed5fb3489b43474eae9395a322a1ee588d380398d5fbda20af"} Oct 14 13:17:08 crc kubenswrapper[4725]: I1014 13:17:08.723731 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:08 crc kubenswrapper[4725]: E1014 13:17:08.723889 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:09.223861187 +0000 UTC m=+146.072295996 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:08 crc kubenswrapper[4725]: I1014 13:17:08.724138 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:08 crc kubenswrapper[4725]: E1014 13:17:08.724581 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:09.224562517 +0000 UTC m=+146.072997326 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:08 crc kubenswrapper[4725]: I1014 13:17:08.728709 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4pqps" event={"ID":"604c84ad-7483-438d-9972-a031afc477f6","Type":"ContainerStarted","Data":"2c4a8305bdcf76ad56a2c91be4b0c3c47cd74c4e92eab6637b59803aeac30a32"} Oct 14 13:17:08 crc kubenswrapper[4725]: I1014 13:17:08.735463 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-r985j" event={"ID":"1abee1b1-8c1e-43df-89cc-5381a2ef0fc6","Type":"ContainerStarted","Data":"110be674e6d07643942cc73b1394be78b70593e737c1141693d7920c0f201544"} Oct 14 13:17:08 crc kubenswrapper[4725]: I1014 13:17:08.737122 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-pfnhc" event={"ID":"1e477098-5f8a-4194-9125-806a2d8724ce","Type":"ContainerStarted","Data":"03c42f21e5fe32d20636116739495fdfed73b259491fada4768f536dbc4a9e14"} Oct 14 13:17:08 crc kubenswrapper[4725]: I1014 13:17:08.745380 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tddw7" event={"ID":"f3ea0115-bde6-42c0-b55f-2bd6d9b68d35","Type":"ContainerStarted","Data":"f9f3f4e196ca5b119423e4177292e944bb730c36ca85c958ee28bb139cff652a"} Oct 14 13:17:08 crc kubenswrapper[4725]: I1014 13:17:08.751576 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" event={"ID":"960ecad3-135d-4478-bd6f-b37588dd49bb","Type":"ContainerStarted","Data":"150a783bf5b4a045fbe996138fd897ae7152ad047a5d6ce55eb51a8329c262c4"} Oct 14 13:17:08 crc kubenswrapper[4725]: I1014 13:17:08.754763 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-k8lml" event={"ID":"b3fd5e11-b817-4e76-a744-09eefc35c83b","Type":"ContainerStarted","Data":"bf1dc9408f0775107262816f068bd16c8d3aea0be817e00715f1bb8b649324d1"} Oct 14 13:17:08 crc kubenswrapper[4725]: I1014 13:17:08.754838 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-k8lml" event={"ID":"b3fd5e11-b817-4e76-a744-09eefc35c83b","Type":"ContainerStarted","Data":"89f5b51dd13dc495d4613e7d7b70cf0c082e83f5c54179c300f6e08e5bc488a7"} Oct 14 13:17:08 crc kubenswrapper[4725]: I1014 13:17:08.761285 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-h2rt9" event={"ID":"ddb4cc6f-047d-4acb-ad5c-b216f5ceb8ce","Type":"ContainerStarted","Data":"95fa10debe1d7809e7be85baa053559144f434108316b8b60bde1bceeff43df9"} Oct 14 13:17:08 crc kubenswrapper[4725]: I1014 13:17:08.761339 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-h2rt9" event={"ID":"ddb4cc6f-047d-4acb-ad5c-b216f5ceb8ce","Type":"ContainerStarted","Data":"60f8e5842715363a2c093dfa77d925fd513baf7f0eee99a7721e7da93965aa73"} Oct 14 13:17:08 crc kubenswrapper[4725]: I1014 13:17:08.761954 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-twd2b" Oct 14 13:17:08 crc kubenswrapper[4725]: I1014 13:17:08.763515 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-462vw" event={"ID":"ded0ec42-2430-4e32-909c-308aeef7c49a","Type":"ContainerStarted","Data":"d3bf45d10317eabc4968caa5319d5a0cc2d85243e9e869a30c7884615c3a9347"} Oct 14 13:17:08 crc kubenswrapper[4725]: I1014 13:17:08.763612 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-462vw" event={"ID":"ded0ec42-2430-4e32-909c-308aeef7c49a","Type":"ContainerStarted","Data":"c39e78d293aadc95081d7f93f2091dc4de458864bec628ef6655b60b35a42eb2"} Oct 14 13:17:08 crc kubenswrapper[4725]: I1014 13:17:08.763963 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-462vw" Oct 14 13:17:08 crc kubenswrapper[4725]: I1014 13:17:08.765557 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-96fdf" event={"ID":"4bb4a9f2-74c0-401e-b880-bd17f95b00d2","Type":"ContainerStarted","Data":"a72738f990f069adffe21122c814d062d74af3de371cfce48438d3a5098c8d79"} Oct 14 13:17:08 crc kubenswrapper[4725]: I1014 13:17:08.768011 4725 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-462vw container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Oct 14 13:17:08 crc kubenswrapper[4725]: I1014 13:17:08.768062 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-462vw" podUID="ded0ec42-2430-4e32-909c-308aeef7c49a" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Oct 14 13:17:08 crc kubenswrapper[4725]: I1014 
13:17:08.825806 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:08 crc kubenswrapper[4725]: E1014 13:17:08.827442 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:09.327395026 +0000 UTC m=+146.175829835 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:08 crc kubenswrapper[4725]: I1014 13:17:08.864420 4725 patch_prober.go:28] interesting pod/router-default-5444994796-twd2b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:17:08 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Oct 14 13:17:08 crc kubenswrapper[4725]: [+]process-running ok Oct 14 13:17:08 crc kubenswrapper[4725]: healthz check failed Oct 14 13:17:08 crc kubenswrapper[4725]: I1014 13:17:08.864548 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-twd2b" podUID="23ffd8bd-e6ca-4824-900e-08ba0cd80041" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:17:08 crc kubenswrapper[4725]: I1014 13:17:08.927485 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:08 crc kubenswrapper[4725]: E1014 13:17:08.929075 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:09.429051014 +0000 UTC m=+146.277486013 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.031713 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.026428 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2bfkn"] Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.032719 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-6pc7v"] Oct 14 13:17:09 crc kubenswrapper[4725]: E1014 13:17:09.031896 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:09.531857472 +0000 UTC m=+146.380292281 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.032935 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:09 crc kubenswrapper[4725]: E1014 13:17:09.033743 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:09.533724125 +0000 UTC m=+146.382158934 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.035669 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-s9xxp"] Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.055799 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9rnn7"] Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.135344 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:09 crc kubenswrapper[4725]: E1014 13:17:09.135841 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:09.635815333 +0000 UTC m=+146.484250202 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.252421 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4g6pl"] Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.254188 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:09 crc kubenswrapper[4725]: E1014 13:17:09.254570 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:09.754551496 +0000 UTC m=+146.602986305 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.329905 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-dmz29"] Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.350424 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-dbshk"] Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.359704 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-zfnkd"] Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.366962 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:09 crc kubenswrapper[4725]: E1014 13:17:09.367607 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:09.867587799 +0000 UTC m=+146.716022608 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.373591 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-8b79b"] Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.373980 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-658kg"] Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.378819 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-p8t6g"] Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.470693 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:09 crc kubenswrapper[4725]: E1014 13:17:09.471255 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-10-14 13:17:09.971237542 +0000 UTC m=+146.819672351 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.481650 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-w98cd"] Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.511831 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hb586"] Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.514481 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kq2mg"] Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.515809 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-twd2b" podStartSLOduration=124.515797141 podStartE2EDuration="2m4.515797141s" podCreationTimestamp="2025-10-14 13:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:17:09.4589576 +0000 UTC m=+146.307392429" watchObservedRunningTime="2025-10-14 13:17:09.515797141 +0000 UTC m=+146.364231950" Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.518688 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6brk2"] Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.520225 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-s94lq"] Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.521923 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340795-6q4fq"] Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.522679 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-462vw" podStartSLOduration=123.522664842 podStartE2EDuration="2m3.522664842s" podCreationTimestamp="2025-10-14 13:15:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:17:09.483531524 +0000 UTC m=+146.331966343" watchObservedRunningTime="2025-10-14 13:17:09.522664842 +0000 UTC m=+146.371099651" Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.531703 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftgzg"] Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.537566 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b9mz9"] Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.537799 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f6klx"] Oct 14 13:17:09 crc 
kubenswrapper[4725]: I1014 13:17:09.546932 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-4v6wb"] Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.547171 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-h2rt9" podStartSLOduration=5.547150583 podStartE2EDuration="5.547150583s" podCreationTimestamp="2025-10-14 13:17:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:17:09.52618626 +0000 UTC m=+146.374621069" watchObservedRunningTime="2025-10-14 13:17:09.547150583 +0000 UTC m=+146.395585392" Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.574181 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:09 crc kubenswrapper[4725]: E1014 13:17:09.574364 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:10.074338109 +0000 UTC m=+146.922772918 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.574670 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:09 crc kubenswrapper[4725]: E1014 13:17:09.575726 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:10.075707807 +0000 UTC m=+146.924142616 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:09 crc kubenswrapper[4725]: W1014 13:17:09.588736 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaba4ae84_aa5c_4790_a5cc_b2c865bae5dd.slice/crio-38dfb4112cbe4f581c74620e3056bbe8fa68a2b77f99fd0231ec72110d8f003d WatchSource:0}: Error finding container 38dfb4112cbe4f581c74620e3056bbe8fa68a2b77f99fd0231ec72110d8f003d: Status 404 returned error can't find the container with id 38dfb4112cbe4f581c74620e3056bbe8fa68a2b77f99fd0231ec72110d8f003d Oct 14 13:17:09 crc kubenswrapper[4725]: W1014 13:17:09.591919 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3fd3f30a_c1ac_4a8f_8d95_9d4165d608a2.slice/crio-5dc367adb71251acc8d33eed6c58f0fd20bc709aca62b9721d7f380501583dfc WatchSource:0}: Error finding container 5dc367adb71251acc8d33eed6c58f0fd20bc709aca62b9721d7f380501583dfc: Status 404 returned error can't find the container with id 5dc367adb71251acc8d33eed6c58f0fd20bc709aca62b9721d7f380501583dfc Oct 14 13:17:09 crc kubenswrapper[4725]: W1014 13:17:09.596636 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod490a96f8_3a20_414a_b664_c2df9a8d373f.slice/crio-5dbb7a4395e446a63c47f1f197ac3b44d33812a8f4352ff183116a06941d47b2 WatchSource:0}: Error finding container 5dbb7a4395e446a63c47f1f197ac3b44d33812a8f4352ff183116a06941d47b2: Status 404 returned error can't find the container with id 5dbb7a4395e446a63c47f1f197ac3b44d33812a8f4352ff183116a06941d47b2 Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.636223 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mspcl"] Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.664487 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-zjwcj"] Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.665922 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-zbjgv"] Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.679377 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:09 crc kubenswrapper[4725]: E1014 13:17:09.680044 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:10.179999508 +0000 UTC m=+147.028434317 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.698805 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-f9xkt"] Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.708999 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6dvkx"] Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.712155 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-f7scx"] Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.732890 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-htmcq"] Oct 14 13:17:09 crc kubenswrapper[4725]: W1014 13:17:09.737670 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77bd2504_9807_458c_80c3_a61e41bfcef2.slice/crio-4a193ca88795cdab76d58f87e815fc93c0f553860a2d9515e048515b011120e7 WatchSource:0}: Error finding container 4a193ca88795cdab76d58f87e815fc93c0f553860a2d9515e048515b011120e7: Status 404 returned error can't find the container with id 4a193ca88795cdab76d58f87e815fc93c0f553860a2d9515e048515b011120e7 Oct 14 13:17:09 crc kubenswrapper[4725]: W1014 13:17:09.749392 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44854d9f_c150_47e2_b099_66e7a7f483e2.slice/crio-7e041c4c1ecb872225e85da5a2203597acb375e2e59c710759b8f3bbdca797c5 WatchSource:0}: Error finding container 7e041c4c1ecb872225e85da5a2203597acb375e2e59c710759b8f3bbdca797c5: Status 404 returned error can't find the container with id 7e041c4c1ecb872225e85da5a2203597acb375e2e59c710759b8f3bbdca797c5 Oct 14 13:17:09 crc kubenswrapper[4725]: W1014 13:17:09.754680 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ececce9_bde7_4d8c_b4b8_75b4a8538c04.slice/crio-5c00c3b8afa73f7726d8749b1cd43c20403e075afa32cf2239d49d573dc96022 WatchSource:0}: Error finding container 5c00c3b8afa73f7726d8749b1cd43c20403e075afa32cf2239d49d573dc96022: Status 404 returned error can't find the container with id 5c00c3b8afa73f7726d8749b1cd43c20403e075afa32cf2239d49d573dc96022 Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.755981 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-jnvzr"] Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.766908 4725 patch_prober.go:28] interesting pod/router-default-5444994796-twd2b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:17:09 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Oct 14 13:17:09 crc kubenswrapper[4725]: [+]process-running ok Oct 14 13:17:09 crc kubenswrapper[4725]: healthz check failed Oct 14 
13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.766978 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-twd2b" podUID="23ffd8bd-e6ca-4824-900e-08ba0cd80041" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.777241 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-zbjgv" event={"ID":"77bd2504-9807-458c-80c3-a61e41bfcef2","Type":"ContainerStarted","Data":"4a193ca88795cdab76d58f87e815fc93c0f553860a2d9515e048515b011120e7"} Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.781019 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:09 crc kubenswrapper[4725]: W1014 13:17:09.781392 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46c9f20f_915c_48c9_b079_ed159fa09d70.slice/crio-f0663ace6bce99818a297d2e848a6ba1c62e29833662ab11871638f673c35a73 WatchSource:0}: Error finding container f0663ace6bce99818a297d2e848a6ba1c62e29833662ab11871638f673c35a73: Status 404 returned error can't find the container with id f0663ace6bce99818a297d2e848a6ba1c62e29833662ab11871638f673c35a73 Oct 14 13:17:09 crc kubenswrapper[4725]: W1014 13:17:09.781913 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb372f112_486d_4848_a78f_552a485abacc.slice/crio-4de848d63fd2e3146b198a1888ec705b074d38b9c2c1a5c98e8d2f180e41d439 WatchSource:0}: Error finding container 4de848d63fd2e3146b198a1888ec705b074d38b9c2c1a5c98e8d2f180e41d439: Status 404 returned error can't find the container with id 4de848d63fd2e3146b198a1888ec705b074d38b9c2c1a5c98e8d2f180e41d439 Oct 14 13:17:09 crc kubenswrapper[4725]: E1014 13:17:09.782001 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:10.281988254 +0000 UTC m=+147.130423063 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.787299 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-658kg" event={"ID":"10813d7e-3ed3-49a7-a2ad-5aa0db76a25d","Type":"ContainerStarted","Data":"e24482cd5b85f747078d8d7973e3718566f883bb04de1b638af34a8089570b12"} Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.789339 4725 generic.go:334] "Generic (PLEG): container finished" podID="b3fd5e11-b817-4e76-a744-09eefc35c83b" containerID="bf1dc9408f0775107262816f068bd16c8d3aea0be817e00715f1bb8b649324d1" exitCode=0 Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.789890 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-k8lml" event={"ID":"b3fd5e11-b817-4e76-a744-09eefc35c83b","Type":"ContainerDied","Data":"bf1dc9408f0775107262816f068bd16c8d3aea0be817e00715f1bb8b649324d1"} Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.791913 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k4sc6" event={"ID":"77a019ee-f99c-4962-9032-6493f34adee4","Type":"ContainerStarted","Data":"3bf30316ff963fbb4d242613ec2eff8021dc06d65a2c07dd0e4e329dc30da827"} Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.794538 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4pqps" event={"ID":"604c84ad-7483-438d-9972-a031afc477f6","Type":"ContainerStarted","Data":"c2a926bebb867910db618594ebe715be5881a62953964d6085cbdfd9b8a3fd18"} Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.794594 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4pqps" event={"ID":"604c84ad-7483-438d-9972-a031afc477f6","Type":"ContainerStarted","Data":"2a3ddf1b979067726a8bfeb9a514abaa34e7cbd470bc3c05ac29da93f672d0bc"} Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.795837 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mspcl" event={"ID":"4357984b-72f2-4c52-bae4-1dce4616b0df","Type":"ContainerStarted","Data":"5593beb1792cf39c8d28903969b6967ee0d268f04c56fe80cf56c305a7b13903"} Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.796893 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-p8t6g" event={"ID":"42497b95-3bd9-480d-9393-db14108c977e","Type":"ContainerStarted","Data":"0f8038731ae0b72b278d318ff2be119f1b099a19448e76ab6fce352489c5d9f8"} Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.798179 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340795-6q4fq" event={"ID":"9d4f5074-61bb-4e5b-91ce-d8149ddb50f1","Type":"ContainerStarted","Data":"152b0b9b75c05f0726ec8e3fed0a3e6f2484cba7f78776d0b5835e3cec9d099a"} Oct 14 13:17:09 crc 
kubenswrapper[4725]: I1014 13:17:09.799089 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-dbshk" event={"ID":"9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9","Type":"ContainerStarted","Data":"f9c175e3e9d30619e7e0ce4c384cb7042becf31a458a23871c44d73b87245a25"} Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.803384 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftgzg" event={"ID":"490a96f8-3a20-414a-b664-c2df9a8d373f","Type":"ContainerStarted","Data":"5dbb7a4395e446a63c47f1f197ac3b44d33812a8f4352ff183116a06941d47b2"} Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.805412 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zfnkd" event={"ID":"9a1f4665-bd0e-4e79-948e-1c1894945013","Type":"ContainerStarted","Data":"7cc8b1dd6ed04d4a50d9cfb4433b25ff38695e21a3b40c2550676c8a7535173e"} Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.810176 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-r985j" event={"ID":"1abee1b1-8c1e-43df-89cc-5381a2ef0fc6","Type":"ContainerStarted","Data":"ec364c11f488e5fffd6a38bbebf656d7dec86c898bc92ffe2a6ec2b1fe68278f"} Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.810234 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-r985j" event={"ID":"1abee1b1-8c1e-43df-89cc-5381a2ef0fc6","Type":"ContainerStarted","Data":"b5094d9fcf5223311d1857c77e0fa6cba04744182ef378ac136763e1167ecff0"} Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.812999 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f9xkt" event={"ID":"4ececce9-bde7-4d8c-b4b8-75b4a8538c04","Type":"ContainerStarted","Data":"5c00c3b8afa73f7726d8749b1cd43c20403e075afa32cf2239d49d573dc96022"} Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.815707 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" event={"ID":"960ecad3-135d-4478-bd6f-b37588dd49bb","Type":"ContainerStarted","Data":"7bb03f40be61bdbaaedd6343cac6b2580e63c59d74271d8ecc7c5e26d5e4e3e3"} Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.816054 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.818310 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tddw7" event={"ID":"f3ea0115-bde6-42c0-b55f-2bd6d9b68d35","Type":"ContainerStarted","Data":"b24cd2515ef02824c0afd7191336afe87378a402db633051dfde155416d81c31"} Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.819065 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9rnn7" event={"ID":"a7148d1c-3586-4dae-a72d-543940574d2e","Type":"ContainerStarted","Data":"4a35ad319f3959cd5cfcc737d8068b510b2dd7be38fc9f92c820ee6bcc193d58"} Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.820501 4725 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-vq7zq container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.7:6443/healthz\": dial tcp 10.217.0.7:6443: connect: connection 
refused" start-of-body= Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.820560 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" podUID="960ecad3-135d-4478-bd6f-b37588dd49bb" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.7:6443/healthz\": dial tcp 10.217.0.7:6443: connect: connection refused" Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.820772 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-6pc7v" event={"ID":"f19f06ec-8bbf-4d8e-a6ef-f11e032454a9","Type":"ContainerStarted","Data":"020dbabf54cb2f1fb5f414ef4fe9a2288a2d1b8bfd6b2fba8a9b183f14631ac6"} Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.820811 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-6pc7v" event={"ID":"f19f06ec-8bbf-4d8e-a6ef-f11e032454a9","Type":"ContainerStarted","Data":"d4ec4ae8af51494cae7644122acf472052ce20b802bd2651cd711a6699869da8"} Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.828441 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w98cd" event={"ID":"29bc7b45-9968-48c5-be01-2c8b7f39df13","Type":"ContainerStarted","Data":"685828dd8f8a6088002f968998f59791c561ed01b8f210bf17499c0ec125f088"} Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.832536 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k4sc6" podStartSLOduration=124.832515939 podStartE2EDuration="2m4.832515939s" podCreationTimestamp="2025-10-14 13:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:17:09.831894272 +0000 UTC m=+146.680329081" watchObservedRunningTime="2025-10-14 13:17:09.832515939 +0000 UTC m=+146.680950768" Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.836212 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-2bfkn" event={"ID":"eb0a83ee-f088-424f-b3c6-8ac8e2a50a27","Type":"ContainerStarted","Data":"87d637190f6de51793ec1d8445f222750ae9ab7ae9cbea66cd4591fdd7b2be21"} Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.844518 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hb586" event={"ID":"aba4ae84-aa5c-4790-a5cc-b2c865bae5dd","Type":"ContainerStarted","Data":"38dfb4112cbe4f581c74620e3056bbe8fa68a2b77f99fd0231ec72110d8f003d"} Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.854751 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6dvkx" event={"ID":"44854d9f-c150-47e2-b099-66e7a7f483e2","Type":"ContainerStarted","Data":"7e041c4c1ecb872225e85da5a2203597acb375e2e59c710759b8f3bbdca797c5"} Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.856547 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4pqps" podStartSLOduration=124.856523197 podStartE2EDuration="2m4.856523197s" podCreationTimestamp="2025-10-14 13:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:17:09.855520918 +0000 UTC 
m=+146.703955747" watchObservedRunningTime="2025-10-14 13:17:09.856523197 +0000 UTC m=+146.704958006" Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.873985 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-dfhsw" event={"ID":"b8ca672d-5997-4fe0-b717-64e0b07ffbbe","Type":"ContainerStarted","Data":"840661d454d6313dcbcedb503871ee31d5f2f95395c5b1839c098753792d020c"} Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.875399 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-dfhsw" Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.879127 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-96fdf" event={"ID":"4bb4a9f2-74c0-401e-b880-bd17f95b00d2","Type":"ContainerStarted","Data":"3cafaa302c96987c9d7e591aabfe3dcecfb6481bd33cce82016808bb8b1ff58e"} Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.879894 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-96fdf" Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.885111 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.886354 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-r985j" podStartSLOduration=123.886319785 podStartE2EDuration="2m3.886319785s" podCreationTimestamp="2025-10-14 13:15:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:17:09.878162488 +0000 UTC m=+146.726597317" watchObservedRunningTime="2025-10-14 13:17:09.886319785 +0000 UTC m=+146.734754594" Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.887903 4725 patch_prober.go:28] interesting pod/console-operator-58897d9998-dfhsw container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.888016 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-dfhsw" podUID="b8ca672d-5997-4fe0-b717-64e0b07ffbbe" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" Oct 14 13:17:09 crc kubenswrapper[4725]: E1014 13:17:09.888251 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:10.388207488 +0000 UTC m=+147.236642307 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.888407 4725 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-96fdf container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.888635 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-96fdf" podUID="4bb4a9f2-74c0-401e-b880-bd17f95b00d2" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.893815 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-zjwcj" event={"ID":"87d34324-1bfd-47d6-8551-7bc545575a4a","Type":"ContainerStarted","Data":"0a70b43c8e5d703fc19832ab079c209074106605fc9ccbe973492589eb1b1581"} Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.903183 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-s9xxp" event={"ID":"46aacb90-0d57-4080-96db-5e477c100fe8","Type":"ContainerStarted","Data":"8f37129b81bbec6967b3e6ef2263c8a54e7c887e4df4f42e7d5f8f2408153602"} Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.903252 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-s9xxp" event={"ID":"46aacb90-0d57-4080-96db-5e477c100fe8","Type":"ContainerStarted","Data":"7c11e7074c698d0d5511a1bf141146bd79ef7ed609241810b0874791df6be2b7"} Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.915384 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b9mz9" event={"ID":"64be0777-3e55-42b0-8832-8c58f1980f27","Type":"ContainerStarted","Data":"ec78495ebf530ff54da2a3d48fd727f94c2e142c36910b254812004f9907fc37"} Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.928321 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" podStartSLOduration=124.928294182 podStartE2EDuration="2m4.928294182s" podCreationTimestamp="2025-10-14 13:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:17:09.918096239 +0000 UTC m=+146.766531058" watchObservedRunningTime="2025-10-14 13:17:09.928294182 +0000 UTC m=+146.776729001" Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.953176 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dmz29" event={"ID":"98ec98f8-f0b2-4170-bb1b-49bf82c82c76","Type":"ContainerStarted","Data":"3870800f0ac03daba153c244b9381213dfb155bbb19a8fca6d42030d892a33df"} Oct 14 13:17:09 crc kubenswrapper[4725]: 
I1014 13:17:09.968230 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kq2mg" event={"ID":"e88b4cc4-37fd-43c3-aaad-ea153ece7b28","Type":"ContainerStarted","Data":"986b8b49ac2fe5b5ac1ff60065d20335fab4281b0d28a743fd4e2ca762a400e1"} Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.972431 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-s94lq" event={"ID":"4432b354-68b6-461c-98f9-11651a4ec51a","Type":"ContainerStarted","Data":"02fdbd35e338401846d591ed3cab1bbe1312ac75d421440156bde45a17ac3a0f"} Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.979133 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-dfhsw" podStartSLOduration=124.979107025 podStartE2EDuration="2m4.979107025s" podCreationTimestamp="2025-10-14 13:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:17:09.978951621 +0000 UTC m=+146.827386430" watchObservedRunningTime="2025-10-14 13:17:09.979107025 +0000 UTC m=+146.827541844" Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.980831 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tddw7" podStartSLOduration=124.980820233 podStartE2EDuration="2m4.980820233s" podCreationTimestamp="2025-10-14 13:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:17:09.952071164 +0000 UTC m=+146.800505983" watchObservedRunningTime="2025-10-14 13:17:09.980820233 +0000 UTC m=+146.829255052" Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.984645 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4g6pl" event={"ID":"d1a46958-489a-4357-adc6-ad2990dc19cd","Type":"ContainerStarted","Data":"9a11efc11731a546058afd21a7a7dd561edd8b9af31e3da6090aa5e431248c3f"} Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.985995 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4g6pl" Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.986619 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f6klx" event={"ID":"d54b2ace-6596-4bcf-88cb-23f381105f80","Type":"ContainerStarted","Data":"00e258f4d8c934e3215e79b043d21af0d9f46ccb73774959ba0eaa759691332d"} Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.992687 4725 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-4g6pl container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:8443/healthz\": dial tcp 10.217.0.43:8443: connect: connection refused" start-of-body= Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.992766 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4g6pl" podUID="d1a46958-489a-4357-adc6-ad2990dc19cd" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.43:8443/healthz\": dial tcp 10.217.0.43:8443: connect: connection refused" Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.993334 
4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-4v6wb" event={"ID":"3fd3f30a-c1ac-4a8f-8d95-9d4165d608a2","Type":"ContainerStarted","Data":"5dc367adb71251acc8d33eed6c58f0fd20bc709aca62b9721d7f380501583dfc"} Oct 14 13:17:09 crc kubenswrapper[4725]: I1014 13:17:09.994531 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:10 crc kubenswrapper[4725]: E1014 13:17:10.000929 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:10.500889841 +0000 UTC m=+147.349324820 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:10 crc kubenswrapper[4725]: I1014 13:17:10.003146 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-pfnhc" event={"ID":"1e477098-5f8a-4194-9125-806a2d8724ce","Type":"ContainerStarted","Data":"8f0d26258a415594086d6391a4910e3e03efe1f0665b73e977016ba8ed38be39"} Oct 14 13:17:10 crc kubenswrapper[4725]: I1014 13:17:10.012863 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-6pc7v" podStartSLOduration=124.012822844 podStartE2EDuration="2m4.012822844s" podCreationTimestamp="2025-10-14 13:15:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:17:09.998653239 +0000 UTC m=+146.847088048" watchObservedRunningTime="2025-10-14 13:17:10.012822844 +0000 UTC m=+146.861257663" Oct 14 13:17:10 crc kubenswrapper[4725]: I1014 13:17:10.014392 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6brk2" event={"ID":"42ee8d72-7daa-4835-b976-db8e34dfdb3c","Type":"ContainerStarted","Data":"c32395a69ac282e5802de0d372962f8c6dacdc0a8c29616cb4c3d55a47ede029"} Oct 14 13:17:10 crc kubenswrapper[4725]: I1014 13:17:10.028901 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-8b79b" event={"ID":"e1e76f4a-94bf-473d-9658-be90b7f79e56","Type":"ContainerStarted","Data":"9274c23386cc8665aaf3e1952dbc2d6a77c59d52fee6b3c7f7ed6518d278eb28"} Oct 14 13:17:10 crc kubenswrapper[4725]: I1014 13:17:10.030398 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-s9xxp" podStartSLOduration=125.030376441 podStartE2EDuration="2m5.030376441s" podCreationTimestamp="2025-10-14 13:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:17:10.028935141 +0000 UTC m=+146.877369940" watchObservedRunningTime="2025-10-14 13:17:10.030376441 +0000 UTC m=+146.878811240" Oct 14 13:17:10 crc kubenswrapper[4725]: I1014 13:17:10.052639 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kq2mg" podStartSLOduration=125.05260606 podStartE2EDuration="2m5.05260606s" podCreationTimestamp="2025-10-14 13:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:17:10.049231996 +0000 UTC m=+146.897666815" watchObservedRunningTime="2025-10-14 13:17:10.05260606 +0000 UTC m=+146.901040889" Oct 14 13:17:10 crc kubenswrapper[4725]: I1014 13:17:10.093007 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4g6pl" podStartSLOduration=124.092979022 podStartE2EDuration="2m4.092979022s" podCreationTimestamp="2025-10-14 13:15:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:17:10.089957208 +0000 UTC m=+146.938392047" watchObservedRunningTime="2025-10-14 13:17:10.092979022 +0000 UTC m=+146.941413831" Oct 14 13:17:10 crc kubenswrapper[4725]: I1014 13:17:10.101314 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:10 crc kubenswrapper[4725]: E1014 13:17:10.101782 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:10.601667644 +0000 UTC m=+147.450102453 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:10 crc kubenswrapper[4725]: I1014 13:17:10.102956 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:10 crc kubenswrapper[4725]: E1014 13:17:10.104978 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:10.604966205 +0000 UTC m=+147.453401014 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:10 crc kubenswrapper[4725]: I1014 13:17:10.110177 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-96fdf" podStartSLOduration=125.11015413 podStartE2EDuration="2m5.11015413s" podCreationTimestamp="2025-10-14 13:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:17:10.109011978 +0000 UTC m=+146.957446807" watchObservedRunningTime="2025-10-14 13:17:10.11015413 +0000 UTC m=+146.958588939" Oct 14 13:17:10 crc kubenswrapper[4725]: I1014 13:17:10.132625 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-pfnhc" podStartSLOduration=125.132583513 podStartE2EDuration="2m5.132583513s" podCreationTimestamp="2025-10-14 13:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:17:10.132353637 +0000 UTC m=+146.980788456" watchObservedRunningTime="2025-10-14 13:17:10.132583513 +0000 UTC m=+146.981018322" Oct 14 13:17:10 crc kubenswrapper[4725]: I1014 13:17:10.204309 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:10 crc kubenswrapper[4725]: E1014 13:17:10.204833 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:10.704811862 +0000 UTC m=+147.553246671 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:10 crc kubenswrapper[4725]: I1014 13:17:10.232606 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-462vw" Oct 14 13:17:10 crc kubenswrapper[4725]: I1014 13:17:10.308080 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:10 crc kubenswrapper[4725]: E1014 13:17:10.308639 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:10.808612579 +0000 UTC m=+147.657047388 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:10 crc kubenswrapper[4725]: I1014 13:17:10.412466 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:10 crc kubenswrapper[4725]: E1014 13:17:10.412691 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:10.912655052 +0000 UTC m=+147.761089871 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:10 crc kubenswrapper[4725]: I1014 13:17:10.413534 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:10 crc kubenswrapper[4725]: E1014 13:17:10.436527 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:10.936500996 +0000 UTC m=+147.784935815 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:10 crc kubenswrapper[4725]: I1014 13:17:10.519697 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:10 crc kubenswrapper[4725]: E1014 13:17:10.520183 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:11.020163893 +0000 UTC m=+147.868598702 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:10 crc kubenswrapper[4725]: I1014 13:17:10.620950 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:10 crc kubenswrapper[4725]: E1014 13:17:10.621362 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:11.121347076 +0000 UTC m=+147.969781885 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:10 crc kubenswrapper[4725]: I1014 13:17:10.723883 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:10 crc kubenswrapper[4725]: E1014 13:17:10.724333 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:11.224290309 +0000 UTC m=+148.072725128 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:10 crc kubenswrapper[4725]: I1014 13:17:10.724819 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:10 crc kubenswrapper[4725]: E1014 13:17:10.725187 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:11.225171564 +0000 UTC m=+148.073606373 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:10 crc kubenswrapper[4725]: I1014 13:17:10.771595 4725 patch_prober.go:28] interesting pod/router-default-5444994796-twd2b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:17:10 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Oct 14 13:17:10 crc kubenswrapper[4725]: [+]process-running ok Oct 14 13:17:10 crc kubenswrapper[4725]: healthz check failed Oct 14 13:17:10 crc kubenswrapper[4725]: I1014 13:17:10.771674 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-twd2b" podUID="23ffd8bd-e6ca-4824-900e-08ba0cd80041" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:17:10 crc kubenswrapper[4725]: I1014 13:17:10.826512 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:10 crc kubenswrapper[4725]: E1014 13:17:10.827086 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:11.327042056 +0000 UTC m=+148.175476875 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:10 crc kubenswrapper[4725]: I1014 13:17:10.947733 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:10 crc kubenswrapper[4725]: E1014 13:17:10.948930 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:11.448909706 +0000 UTC m=+148.297344515 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.050627 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:11 crc kubenswrapper[4725]: E1014 13:17:11.051445 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:11.551419586 +0000 UTC m=+148.399854395 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.056844 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b9mz9" event={"ID":"64be0777-3e55-42b0-8832-8c58f1980f27","Type":"ContainerStarted","Data":"d0b453d729c25628f568ca7f3eda8cf0dd3759233bfeab42e4ffee12419b1d4f"} Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.079374 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w98cd" event={"ID":"29bc7b45-9968-48c5-be01-2c8b7f39df13","Type":"ContainerStarted","Data":"c4cc9fa87d31ec51707cf3e0dc97b9a4dea8725872c26c436358e62a5d1c45c5"} Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.113310 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kq2mg" event={"ID":"e88b4cc4-37fd-43c3-aaad-ea153ece7b28","Type":"ContainerStarted","Data":"ba4fa41e5005831da821e87b3c076e40f17b04efb067bc90970468e649aaba2b"} Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.122780 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f9xkt" event={"ID":"4ececce9-bde7-4d8c-b4b8-75b4a8538c04","Type":"ContainerStarted","Data":"b73a1971034a69f0f3c0bf49f02c4886a13f7f7b3dcb3b8c4bc75a9d7ef33b6d"} Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.125021 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-658kg" event={"ID":"10813d7e-3ed3-49a7-a2ad-5aa0db76a25d","Type":"ContainerStarted","Data":"19947f9deea9753041964f8f8cf2dbdd19a07d04f613219cc40a651234460a7a"} Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.128869 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-8b79b" event={"ID":"e1e76f4a-94bf-473d-9658-be90b7f79e56","Type":"ContainerStarted","Data":"c2374fc9762b5f41d37097a40451010070930ed3a02ca32b88f361644e83981d"} Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.131824 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftgzg" event={"ID":"490a96f8-3a20-414a-b664-c2df9a8d373f","Type":"ContainerStarted","Data":"ba2cf25c59d735f0500f93767ade1a436aac744a9cb34895815ed3ceb988a268"} Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.133117 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftgzg" Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.141098 4725 generic.go:334] "Generic (PLEG): container finished" podID="9a1f4665-bd0e-4e79-948e-1c1894945013" containerID="86981af2cc4e1efaabaa85a1c1b320a23be448d057aeb2f09382df04dc747c35" exitCode=0 Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.141165 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zfnkd" 
event={"ID":"9a1f4665-bd0e-4e79-948e-1c1894945013","Type":"ContainerDied","Data":"86981af2cc4e1efaabaa85a1c1b320a23be448d057aeb2f09382df04dc747c35"} Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.147773 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-658kg" podStartSLOduration=126.147760516 podStartE2EDuration="2m6.147760516s" podCreationTimestamp="2025-10-14 13:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:17:11.14612655 +0000 UTC m=+147.994561359" watchObservedRunningTime="2025-10-14 13:17:11.147760516 +0000 UTC m=+147.996195325" Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.151048 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-s94lq" event={"ID":"4432b354-68b6-461c-98f9-11651a4ec51a","Type":"ContainerStarted","Data":"e25b0336225dbdd0da177c0b0c39fb3cd1eac0d70c5843434c2bfba460448f4e"} Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.152541 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.155019 4725 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-ftgzg container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:5443/healthz\": dial tcp 10.217.0.21:5443: connect: connection refused" start-of-body= Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.155076 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftgzg" podUID="490a96f8-3a20-414a-b664-c2df9a8d373f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.21:5443/healthz\": dial tcp 10.217.0.21:5443: connect: connection refused" Oct 14 13:17:11 crc kubenswrapper[4725]: E1014 13:17:11.155403 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:11.655388707 +0000 UTC m=+148.503823516 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.157917 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340795-6q4fq" event={"ID":"9d4f5074-61bb-4e5b-91ce-d8149ddb50f1","Type":"ContainerStarted","Data":"181f5706bdea2990a91001901bbec7f8d3461b974f9ffee1adfde16ddb54a858"} Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.173301 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6dvkx" event={"ID":"44854d9f-c150-47e2-b099-66e7a7f483e2","Type":"ContainerStarted","Data":"1b910c4f76ec32b0d170a2370c29ef26a22c13d7c364bd693d6dd7886bef2445"} Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.178982 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftgzg" podStartSLOduration=125.178961783 podStartE2EDuration="2m5.178961783s" podCreationTimestamp="2025-10-14 13:15:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:17:11.178324976 +0000 UTC m=+148.026759805" watchObservedRunningTime="2025-10-14 13:17:11.178961783 +0000 UTC m=+148.027396592" Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.188980 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-k8lml" event={"ID":"b3fd5e11-b817-4e76-a744-09eefc35c83b","Type":"ContainerStarted","Data":"8a464d2d9355ec26c485d08ff5a895619bf1b94abb4608e25182000cd87d2847"} Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.189992 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-k8lml" Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.209078 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9rnn7" event={"ID":"a7148d1c-3586-4dae-a72d-543940574d2e","Type":"ContainerStarted","Data":"9916f1822594810c7132cdb99eb92ca260afdf196b953e72276353a790b22dc8"} Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.239079 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-zjwcj" event={"ID":"87d34324-1bfd-47d6-8551-7bc545575a4a","Type":"ContainerStarted","Data":"7db619fbecfe6916b96c54d535c6d229c3eaa5fba0eb0f096e4e5d1f5f25532a"} Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.253534 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:11 crc kubenswrapper[4725]: E1014 13:17:11.254116 4725 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:11.754082383 +0000 UTC m=+148.602517192 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.254392 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:11 crc kubenswrapper[4725]: E1014 13:17:11.256741 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:11.756731476 +0000 UTC m=+148.605166285 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.274620 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-k8lml" podStartSLOduration=126.274601313 podStartE2EDuration="2m6.274601313s" podCreationTimestamp="2025-10-14 13:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:17:11.272977347 +0000 UTC m=+148.121412166" watchObservedRunningTime="2025-10-14 13:17:11.274601313 +0000 UTC m=+148.123036122" Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.275230 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6dvkx" podStartSLOduration=125.27522408 podStartE2EDuration="2m5.27522408s" podCreationTimestamp="2025-10-14 13:15:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:17:11.239357082 +0000 UTC m=+148.087791891" watchObservedRunningTime="2025-10-14 13:17:11.27522408 +0000 UTC m=+148.123658889" Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.289116 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dmz29" event={"ID":"98ec98f8-f0b2-4170-bb1b-49bf82c82c76","Type":"ContainerStarted","Data":"8164208335e3b5182b55ef7231d882ca451021a6387c3513943d9ee79187e0a8"} Oct 14 
13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.295871 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29340795-6q4fq" podStartSLOduration=125.295849404 podStartE2EDuration="2m5.295849404s" podCreationTimestamp="2025-10-14 13:15:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:17:11.291077951 +0000 UTC m=+148.139512760" watchObservedRunningTime="2025-10-14 13:17:11.295849404 +0000 UTC m=+148.144284213"
Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.300551 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f6klx" event={"ID":"d54b2ace-6596-4bcf-88cb-23f381105f80","Type":"ContainerStarted","Data":"21f45b2aaac633157d552ca32b0660896ec66d08568816e86e9e57cb3f3f64e8"}
Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.301737 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f6klx"
Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.324874 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9rnn7" podStartSLOduration=126.32482337 podStartE2EDuration="2m6.32482337s" podCreationTimestamp="2025-10-14 13:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:17:11.323267076 +0000 UTC m=+148.171701885" watchObservedRunningTime="2025-10-14 13:17:11.32482337 +0000 UTC m=+148.173258179"
Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.325840 4725 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-f6klx container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body=
Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.326163 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f6klx" podUID="d54b2ace-6596-4bcf-88cb-23f381105f80" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused"
Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.328655 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-htmcq" event={"ID":"218135fe-157d-49e2-b391-acf0af7fdc3e","Type":"ContainerStarted","Data":"9cffec13f945a4c64e3c98bcee9d418be66b7aee08278594c05db440d03fb9ce"}
Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.355695 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jnvzr" event={"ID":"b372f112-486d-4848-a78f-552a485abacc","Type":"ContainerStarted","Data":"4de848d63fd2e3146b198a1888ec705b074d38b9c2c1a5c98e8d2f180e41d439"}
Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.359351 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 13:17:11 crc kubenswrapper[4725]: E1014 13:17:11.359707 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:11.859656208 +0000 UTC m=+148.708091027 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.359925 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4"
Oct 14 13:17:11 crc kubenswrapper[4725]: E1014 13:17:11.360492 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:11.860466551 +0000 UTC m=+148.708901360 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.368247 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f6klx" podStartSLOduration=125.368197495 podStartE2EDuration="2m5.368197495s" podCreationTimestamp="2025-10-14 13:15:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:17:11.360725848 +0000 UTC m=+148.209160657" watchObservedRunningTime="2025-10-14 13:17:11.368197495 +0000 UTC m=+148.216632304"
Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.380931 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-dbshk" event={"ID":"9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9","Type":"ContainerStarted","Data":"7c0b62ef5c6aa141f16a2227f2a5f4dd9c305cc041cbc58c134e303c6b700b75"}
Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.393852 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-zjwcj" podStartSLOduration=7.393825889 podStartE2EDuration="7.393825889s" podCreationTimestamp="2025-10-14 13:17:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:17:11.393336055 +0000 UTC m=+148.241770874" watchObservedRunningTime="2025-10-14 13:17:11.393825889 +0000 UTC m=+148.242260708"
Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.407881 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mspcl" event={"ID":"4357984b-72f2-4c52-bae4-1dce4616b0df","Type":"ContainerStarted","Data":"96cd822842db00614feb1d17e6e70149b2edb0f4683604dcf2a64c154c29ab27"}
Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.420419 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4g6pl" event={"ID":"d1a46958-489a-4357-adc6-ad2990dc19cd","Type":"ContainerStarted","Data":"ec5f52f0b2f990521e3d90975f21727e303bf04c2bed430c73036480acde7925"}
Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.422046 4725 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-4g6pl container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:8443/healthz\": dial tcp 10.217.0.43:8443: connect: connection refused" start-of-body=
Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.422094 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4g6pl" podUID="d1a46958-489a-4357-adc6-ad2990dc19cd" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.43:8443/healthz\": dial tcp 10.217.0.43:8443: connect: connection refused"
Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.424318 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dmz29" podStartSLOduration=126.424271155 podStartE2EDuration="2m6.424271155s" podCreationTimestamp="2025-10-14 13:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:17:11.422203918 +0000 UTC m=+148.270638747" watchObservedRunningTime="2025-10-14 13:17:11.424271155 +0000 UTC m=+148.272705964"
Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.466567 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-htmcq" podStartSLOduration=126.46654265 podStartE2EDuration="2m6.46654265s" podCreationTimestamp="2025-10-14 13:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:17:11.465784409 +0000 UTC m=+148.314219218" watchObservedRunningTime="2025-10-14 13:17:11.46654265 +0000 UTC m=+148.314977469"
Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.468222 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.469792 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hb586" event={"ID":"aba4ae84-aa5c-4790-a5cc-b2c865bae5dd","Type":"ContainerStarted","Data":"2237d670b3b2868ab6ba323e777265ab33059959b81c97d04b37e51584eec22f"}
Oct 14 13:17:11 crc kubenswrapper[4725]: E1014 13:17:11.472944 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:11.972914338 +0000 UTC m=+148.821349147 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.484930 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-hb586"
Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.499730 4725 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hb586 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body=
Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.499830 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hb586" podUID="aba4ae84-aa5c-4790-a5cc-b2c865bae5dd" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused"
Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.502672 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-2bfkn" event={"ID":"eb0a83ee-f088-424f-b3c6-8ac8e2a50a27","Type":"ContainerStarted","Data":"4b9304574f76a4bd9145b32ca0026b0c20fd9409ad4477999aaf3181e5548b50"}
Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.514717 4725 generic.go:334] "Generic (PLEG): container finished" podID="42497b95-3bd9-480d-9393-db14108c977e" containerID="980ec7201a9d52a4b3e74d06b6d1802c8ec32701863cf5058bf2791503bbfe38" exitCode=0
Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.514839 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-p8t6g" event={"ID":"42497b95-3bd9-480d-9393-db14108c977e","Type":"ContainerDied","Data":"980ec7201a9d52a4b3e74d06b6d1802c8ec32701863cf5058bf2791503bbfe38"}
Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.572654 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mspcl" podStartSLOduration=126.572627061 podStartE2EDuration="2m6.572627061s" podCreationTimestamp="2025-10-14 13:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:17:11.5208243 +0000 UTC m=+148.369259109" watchObservedRunningTime="2025-10-14 13:17:11.572627061 +0000 UTC m=+148.421061870"
Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.575709 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4"
Oct 14 13:17:11 crc kubenswrapper[4725]: E1014 13:17:11.576731 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:12.076680894 +0000 UTC m=+148.925115893 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.586941 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f7scx" event={"ID":"46c9f20f-915c-48c9-b079-ed159fa09d70","Type":"ContainerStarted","Data":"055eb5a6483a0b052110047480e03cb918ca07b627d3f6112fc0fceb2c0bbc4f"}
Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.587057 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f7scx" event={"ID":"46c9f20f-915c-48c9-b079-ed159fa09d70","Type":"ContainerStarted","Data":"f0663ace6bce99818a297d2e848a6ba1c62e29833662ab11871638f673c35a73"}
Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.634834 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-dbshk" podStartSLOduration=126.63480427 podStartE2EDuration="2m6.63480427s" podCreationTimestamp="2025-10-14 13:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:17:11.574354949 +0000 UTC m=+148.422789768" watchObservedRunningTime="2025-10-14 13:17:11.63480427 +0000 UTC m=+148.483239079"
Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.635706 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6brk2" event={"ID":"42ee8d72-7daa-4835-b976-db8e34dfdb3c","Type":"ContainerStarted","Data":"cb14b0c731326c8489903c689be452d0c655a3a0212ec07c2bb3dbf7a587814c"}
Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.674553 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-4v6wb" event={"ID":"3fd3f30a-c1ac-4a8f-8d95-9d4165d608a2","Type":"ContainerStarted","Data":"2688fef18f3f3511510409ff1150e53d2c8a47087bc705592aeffea8f33e9fa7"}
Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.705376 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-4v6wb"
Oct 14 13:17:11 crc kubenswrapper[4725]: E1014 13:17:11.686777 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:12.186751335 +0000 UTC m=+149.035186144 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.686478 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.707138 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4"
Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.714058 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-96fdf"
Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.717441 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-4v6wb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body=
Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.727686 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4v6wb" podUID="3fd3f30a-c1ac-4a8f-8d95-9d4165d608a2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused"
Oct 14 13:17:11 crc kubenswrapper[4725]: E1014 13:17:11.751673 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:12.251646979 +0000 UTC m=+149.100081788 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.769102 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-dfhsw"
Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.801392 4725 patch_prober.go:28] interesting pod/router-default-5444994796-twd2b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:17:11 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld
Oct 14 13:17:11 crc kubenswrapper[4725]: [+]process-running ok
Oct 14 13:17:11 crc kubenswrapper[4725]: healthz check failed
Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.801491 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-twd2b" podUID="23ffd8bd-e6ca-4824-900e-08ba0cd80041" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.810544 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.816066 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-2bfkn" podStartSLOduration=126.81604653 podStartE2EDuration="2m6.81604653s" podCreationTimestamp="2025-10-14 13:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:17:11.814670322 +0000 UTC m=+148.663105141" watchObservedRunningTime="2025-10-14 13:17:11.81604653 +0000 UTC m=+148.664481339"
Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.816170 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-hb586" podStartSLOduration=125.816166793 podStartE2EDuration="2m5.816166793s" podCreationTimestamp="2025-10-14 13:15:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:17:11.769690671 +0000 UTC m=+148.618125480" watchObservedRunningTime="2025-10-14 13:17:11.816166793 +0000 UTC m=+148.664601602"
Oct 14 13:17:11 crc kubenswrapper[4725]: E1014 13:17:11.823642 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:12.3236 +0000 UTC m=+149.172034809 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 13:17:11 crc kubenswrapper[4725]: I1014 13:17:11.927471 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4"
Oct 14 13:17:11 crc kubenswrapper[4725]: E1014 13:17:11.927947 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:12.427921431 +0000 UTC m=+149.276356230 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 13:17:12 crc kubenswrapper[4725]: I1014 13:17:12.060085 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 13:17:12 crc kubenswrapper[4725]: E1014 13:17:12.060583 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:12.560560381 +0000 UTC m=+149.408995190 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 13:17:12 crc kubenswrapper[4725]: I1014 13:17:12.158141 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6brk2" podStartSLOduration=127.158116023 podStartE2EDuration="2m7.158116023s" podCreationTimestamp="2025-10-14 13:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:17:12.090742209 +0000 UTC m=+148.939177018" watchObservedRunningTime="2025-10-14 13:17:12.158116023 +0000 UTC m=+149.006550832"
Oct 14 13:17:12 crc kubenswrapper[4725]: I1014 13:17:12.162859 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4"
Oct 14 13:17:12 crc kubenswrapper[4725]: E1014 13:17:12.163258 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:12.663243236 +0000 UTC m=+149.511678045 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 13:17:12 crc kubenswrapper[4725]: I1014 13:17:12.197840 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-4v6wb" podStartSLOduration=127.197818668 podStartE2EDuration="2m7.197818668s" podCreationTimestamp="2025-10-14 13:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:17:12.158256047 +0000 UTC m=+149.006690856" watchObservedRunningTime="2025-10-14 13:17:12.197818668 +0000 UTC m=+149.046253477"
Oct 14 13:17:12 crc kubenswrapper[4725]: I1014 13:17:12.202626 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f7scx" podStartSLOduration=126.20260801 podStartE2EDuration="2m6.20260801s" podCreationTimestamp="2025-10-14 13:15:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:17:12.183043706 +0000 UTC m=+149.031478515" watchObservedRunningTime="2025-10-14 13:17:12.20260801 +0000 UTC m=+149.051042819"
Oct 14 13:17:12 crc kubenswrapper[4725]: I1014 13:17:12.264846 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 13:17:12 crc kubenswrapper[4725]: E1014 13:17:12.265977 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:12.765948722 +0000 UTC m=+149.614383531 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 13:17:12 crc kubenswrapper[4725]: I1014 13:17:12.367918 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4"
Oct 14 13:17:12 crc kubenswrapper[4725]: E1014 13:17:12.368376 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:12.86835985 +0000 UTC m=+149.716794659 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 13:17:12 crc kubenswrapper[4725]: I1014 13:17:12.384470 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq"
Oct 14 13:17:12 crc kubenswrapper[4725]: I1014 13:17:12.469219 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 13:17:12 crc kubenswrapper[4725]: E1014 13:17:12.469679 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:12.969655197 +0000 UTC m=+149.818090006 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 13:17:12 crc kubenswrapper[4725]: I1014 13:17:12.570887 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4"
Oct 14 13:17:12 crc kubenswrapper[4725]: E1014 13:17:12.571364 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:13.071340995 +0000 UTC m=+149.919775804 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 13:17:12 crc kubenswrapper[4725]: I1014 13:17:12.671970 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 13:17:12 crc kubenswrapper[4725]: E1014 13:17:12.672223 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:13.172182709 +0000 UTC m=+150.020617548 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 13:17:12 crc kubenswrapper[4725]: I1014 13:17:12.672818 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4"
Oct 14 13:17:12 crc kubenswrapper[4725]: E1014 13:17:12.673397 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:13.173359282 +0000 UTC m=+150.021794091 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 13:17:12 crc kubenswrapper[4725]: I1014 13:17:12.679309 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-f7scx" event={"ID":"46c9f20f-915c-48c9-b079-ed159fa09d70","Type":"ContainerStarted","Data":"2a59396e65ac6f943992afaf7841235ed606833bc64e65075d695cd7c436b1ab"}
Oct 14 13:17:12 crc kubenswrapper[4725]: I1014 13:17:12.682507 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w98cd" event={"ID":"29bc7b45-9968-48c5-be01-2c8b7f39df13","Type":"ContainerStarted","Data":"4075d9680fcade2fe0d741065a59da21fcae90cb265630d0212cd8a3d1cfaa43"}
Oct 14 13:17:12 crc kubenswrapper[4725]: I1014 13:17:12.689796 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b9mz9" event={"ID":"64be0777-3e55-42b0-8832-8c58f1980f27","Type":"ContainerStarted","Data":"8925d8136a7901e40b25af28eae8b40a994045d0c4943aad0901b5609f6ab557"}
Oct 14 13:17:12 crc kubenswrapper[4725]: I1014 13:17:12.689894 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b9mz9"
Oct 14 13:17:12 crc kubenswrapper[4725]: I1014 13:17:12.694541 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zfnkd" event={"ID":"9a1f4665-bd0e-4e79-948e-1c1894945013","Type":"ContainerStarted","Data":"e64e35c5fa91b7891563c094de80e807b660943378850cf8b8f2e2f3be937952"}
Oct 14 13:17:12 crc kubenswrapper[4725]: I1014 13:17:12.698184 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f9xkt" event={"ID":"4ececce9-bde7-4d8c-b4b8-75b4a8538c04","Type":"ContainerStarted","Data":"c9e1da7d238e2ee7f550b685a8b2030dad06eb4b9283cbf4b46c4d246d020bbf"}
Oct 14 13:17:12 crc kubenswrapper[4725]: I1014 13:17:12.700786 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-p8t6g" event={"ID":"42497b95-3bd9-480d-9393-db14108c977e","Type":"ContainerStarted","Data":"ddf0b76b29fce728dfef669e7ba59065abb14722d9b304071c853e5ccff1b298"}
Oct 14 13:17:12 crc kubenswrapper[4725]: I1014 13:17:12.703227 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dmz29" event={"ID":"98ec98f8-f0b2-4170-bb1b-49bf82c82c76","Type":"ContainerStarted","Data":"3f43a0ba734a6063ebe69802fe7af21c4e08559548f68ac02a17712af51468c8"}
Oct 14 13:17:12 crc kubenswrapper[4725]: I1014 13:17:12.705551 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-s94lq" event={"ID":"4432b354-68b6-461c-98f9-11651a4ec51a","Type":"ContainerStarted","Data":"c31616b5449c5e44d3106430c786c4b71f0d48c34b65009c9f88965aec8e3268"}
Oct 14 13:17:12 crc kubenswrapper[4725]: I1014 13:17:12.708318 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-8b79b" event={"ID":"e1e76f4a-94bf-473d-9658-be90b7f79e56","Type":"ContainerStarted","Data":"d34b8b71f2f1d82511bd845dd6268164bdd74d46d8e857a337d60caa6975437f"}
Oct 14 13:17:12 crc kubenswrapper[4725]: I1014 13:17:12.712581 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-zbjgv" event={"ID":"77bd2504-9807-458c-80c3-a61e41bfcef2","Type":"ContainerStarted","Data":"7615b6f8e7f1a8a232c14115ff84cdf0dc4f7d7d900a5adc9d3afe03cee2ac16"}
Oct 14 13:17:12 crc kubenswrapper[4725]: I1014 13:17:12.712630 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-zbjgv" event={"ID":"77bd2504-9807-458c-80c3-a61e41bfcef2","Type":"ContainerStarted","Data":"6c9f61e1a050d4f933f38b428cef6fcffba02fae9438212eebbcdd978ee9aa40"}
Oct 14 13:17:12 crc kubenswrapper[4725]: I1014 13:17:12.712784 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-zbjgv"
Oct 14 13:17:12 crc kubenswrapper[4725]: I1014 13:17:12.716107 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-htmcq" event={"ID":"218135fe-157d-49e2-b391-acf0af7fdc3e","Type":"ContainerStarted","Data":"407e77b5e3e7277f2948e75c80a6fca75f662e6014799ac913ea587db4d7ea56"}
Oct 14 13:17:12 crc kubenswrapper[4725]: I1014 13:17:12.717908 4725 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-f6klx container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body=
Oct 14 13:17:12 crc kubenswrapper[4725]: I1014 13:17:12.717945 4725 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hb586 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body=
Oct 14 13:17:12 crc kubenswrapper[4725]: I1014 13:17:12.717980 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f6klx" podUID="d54b2ace-6596-4bcf-88cb-23f381105f80" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused"
Oct 14 13:17:12 crc kubenswrapper[4725]: I1014 13:17:12.718003 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hb586" podUID="aba4ae84-aa5c-4790-a5cc-b2c865bae5dd" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused"
Oct 14 13:17:12 crc kubenswrapper[4725]: I1014 13:17:12.718302 4725 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-ftgzg container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:5443/healthz\": dial tcp 10.217.0.21:5443: connect: connection refused" start-of-body=
Oct 14 13:17:12 crc kubenswrapper[4725]: I1014 13:17:12.718416 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftgzg" podUID="490a96f8-3a20-414a-b664-c2df9a8d373f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.21:5443/healthz\": dial tcp 10.217.0.21:5443: connect: connection refused"
Oct 14 13:17:12 crc kubenswrapper[4725]: I1014 13:17:12.718429 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-4v6wb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body=
Oct 14 13:17:12 crc kubenswrapper[4725]: I1014 13:17:12.718471 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4v6wb" podUID="3fd3f30a-c1ac-4a8f-8d95-9d4165d608a2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused"
Oct 14 13:17:12 crc kubenswrapper[4725]: I1014 13:17:12.749108 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4g6pl"
Oct 14 13:17:12 crc kubenswrapper[4725]: I1014 13:17:12.775787 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 13:17:12 crc kubenswrapper[4725]: I1014 13:17:12.775812 4725 patch_prober.go:28] interesting pod/router-default-5444994796-twd2b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 14 13:17:12 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld
Oct 14 13:17:12 crc kubenswrapper[4725]: [+]process-running ok
Oct 14 13:17:12 crc kubenswrapper[4725]: healthz check failed
Oct 14 13:17:12 crc kubenswrapper[4725]: I1014 13:17:12.775874 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-twd2b" podUID="23ffd8bd-e6ca-4824-900e-08ba0cd80041" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 14 13:17:12 crc kubenswrapper[4725]: I1014 13:17:12.776929 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w98cd" podStartSLOduration=127.776902571 podStartE2EDuration="2m7.776902571s" podCreationTimestamp="2025-10-14 13:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:17:12.725894893 +0000 UTC m=+149.574329722" watchObservedRunningTime="2025-10-14 13:17:12.776902571 +0000 UTC m=+149.625337380"
Oct 14 13:17:12 crc kubenswrapper[4725]: I1014 13:17:12.777132 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-s94lq" podStartSLOduration=126.777128267 podStartE2EDuration="2m6.777128267s" podCreationTimestamp="2025-10-14 13:15:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:17:12.776104039 +0000 UTC m=+149.624538848" watchObservedRunningTime="2025-10-14 13:17:12.777128267 +0000 UTC m=+149.625563076"
Oct 14 13:17:12 crc kubenswrapper[4725]: E1014 13:17:12.777688 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:13.277660732 +0000 UTC m=+150.126095541 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 13:17:12 crc kubenswrapper[4725]: I1014 13:17:12.906170 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4"
Oct 14 13:17:12 crc kubenswrapper[4725]: E1014 13:17:12.906656 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:13.40663974 +0000 UTC m=+150.255074549 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 13:17:12 crc kubenswrapper[4725]: I1014 13:17:12.912328 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b9mz9" podStartSLOduration=126.912302567 podStartE2EDuration="2m6.912302567s" podCreationTimestamp="2025-10-14 13:15:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:17:12.866832092 +0000 UTC m=+149.715266921" watchObservedRunningTime="2025-10-14 13:17:12.912302567 +0000 UTC m=+149.760737386"
Oct 14 13:17:13 crc kubenswrapper[4725]: I1014 13:17:13.005310 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zfnkd" podStartSLOduration=127.005282573 podStartE2EDuration="2m7.005282573s" podCreationTimestamp="2025-10-14 13:15:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:17:12.951866547 +0000 UTC m=+149.800301376" watchObservedRunningTime="2025-10-14 13:17:13.005282573 +0000 UTC m=+149.853717392"
Oct 14 13:17:13 crc kubenswrapper[4725]: I1014 13:17:13.008086 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 13:17:13 crc kubenswrapper[4725]: E1014 13:17:13.008171 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:13.508148302 +0000 UTC m=+150.356583111 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 13:17:13 crc kubenswrapper[4725]: I1014 13:17:13.025678 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4"
Oct 14 13:17:13 crc kubenswrapper[4725]: E1014 13:17:13.026114 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:13.526095591 +0000 UTC m=+150.374530400 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 13:17:13 crc kubenswrapper[4725]: I1014 13:17:13.119383 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-8b79b" podStartSLOduration=128.119358845 podStartE2EDuration="2m8.119358845s" podCreationTimestamp="2025-10-14 13:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:17:13.026300007 +0000 UTC m=+149.874734816" watchObservedRunningTime="2025-10-14 13:17:13.119358845 +0000 UTC m=+149.967793654"
Oct 14 13:17:13 crc kubenswrapper[4725]: I1014 13:17:13.129424 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 13:17:13 crc kubenswrapper[4725]: E1014 13:17:13.129834 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:13.629814376 +0000 UTC m=+150.478249185 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 13:17:13 crc kubenswrapper[4725]: I1014 13:17:13.212213 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-zbjgv" podStartSLOduration=9.212190137 podStartE2EDuration="9.212190137s" podCreationTimestamp="2025-10-14 13:17:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:17:13.12887285 +0000 UTC m=+149.977307679" watchObservedRunningTime="2025-10-14 13:17:13.212190137 +0000 UTC m=+150.060624936"
Oct 14 13:17:13 crc kubenswrapper[4725]: I1014 13:17:13.231569 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4"
Oct 14 13:17:13 crc kubenswrapper[4725]: E1014 13:17:13.232031 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:13.732015088 +0000 UTC m=+150.580449897 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 13:17:13 crc kubenswrapper[4725]: I1014 13:17:13.334100 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f9xkt" podStartSLOduration=127.334072416 podStartE2EDuration="2m7.334072416s" podCreationTimestamp="2025-10-14 13:15:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:17:13.214888992 +0000 UTC m=+150.063323811" watchObservedRunningTime="2025-10-14 13:17:13.334072416 +0000 UTC m=+150.182507265"
Oct 14 13:17:13 crc kubenswrapper[4725]: I1014 13:17:13.335250 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 13:17:13 crc kubenswrapper[4725]: E1014 13:17:13.335831 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:13.835806715 +0000 UTC m=+150.684241534 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 13:17:13 crc kubenswrapper[4725]: I1014 13:17:13.437377 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4"
Oct 14 13:17:13 crc kubenswrapper[4725]: E1014 13:17:13.437987 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:13.937958415 +0000 UTC m=+150.786393404 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 13:17:13 crc kubenswrapper[4725]: I1014 13:17:13.538264 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 13:17:13 crc kubenswrapper[4725]: E1014 13:17:13.538473 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:14.038421239 +0000 UTC m=+150.886856048 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 13:17:13 crc kubenswrapper[4725]: I1014 13:17:13.539046 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4"
Oct 14 13:17:13 crc kubenswrapper[4725]: E1014 13:17:13.539404 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:14.039389376 +0000 UTC m=+150.887824185 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 13:17:13 crc kubenswrapper[4725]: I1014 13:17:13.640050 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 13:17:13 crc kubenswrapper[4725]: E1014 13:17:13.640279 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:14.1402423 +0000 UTC m=+150.988677109 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 13:17:13 crc kubenswrapper[4725]: I1014 13:17:13.640379 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4"
Oct 14 13:17:13 crc kubenswrapper[4725]: E1014 13:17:13.640818 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:14.140808537 +0000 UTC m=+150.989243346 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 14 13:17:13 crc kubenswrapper[4725]: I1014 13:17:13.723977 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jnvzr" event={"ID":"b372f112-486d-4848-a78f-552a485abacc","Type":"ContainerStarted","Data":"3cb371b2baaf59f18be4621b2892e33b344bca3a072377841025fb947cdeb432"}
Oct 14 13:17:13 crc kubenswrapper[4725]: I1014 13:17:13.728162 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-p8t6g" event={"ID":"42497b95-3bd9-480d-9393-db14108c977e","Type":"ContainerStarted","Data":"0bcf520deefea09db8e9a2c0c5db43707225cf16cc1df0eb3bca3777cb9ba558"}
Oct 14 13:17:13 crc kubenswrapper[4725]: I1014 13:17:13.729528 4725 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hb586 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body=
Oct 14 13:17:13 crc kubenswrapper[4725]: I1014 13:17:13.729602 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hb586" podUID="aba4ae84-aa5c-4790-a5cc-b2c865bae5dd" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused"
Oct 14 13:17:13 crc kubenswrapper[4725]: I1014 13:17:13.742034 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 14 13:17:13 crc kubenswrapper[4725]: E1014 13:17:13.742292 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:14.242250918 +0000 UTC m=+151.090685737 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:13 crc kubenswrapper[4725]: I1014 13:17:13.742480 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:13 crc kubenswrapper[4725]: E1014 13:17:13.742917 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:14.242906196 +0000 UTC m=+151.091341005 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:13 crc kubenswrapper[4725]: I1014 13:17:13.757959 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-f6klx" Oct 14 13:17:13 crc kubenswrapper[4725]: I1014 13:17:13.766525 4725 patch_prober.go:28] interesting pod/router-default-5444994796-twd2b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:17:13 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Oct 14 13:17:13 crc kubenswrapper[4725]: [+]process-running ok Oct 14 13:17:13 crc kubenswrapper[4725]: healthz check failed Oct 14 13:17:13 crc kubenswrapper[4725]: I1014 13:17:13.766633 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-twd2b" podUID="23ffd8bd-e6ca-4824-900e-08ba0cd80041" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:17:13 crc kubenswrapper[4725]: I1014 13:17:13.802186 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-p8t6g" podStartSLOduration=128.802158203 podStartE2EDuration="2m8.802158203s" podCreationTimestamp="2025-10-14 13:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:17:13.799592192 +0000 UTC m=+150.648027001" watchObservedRunningTime="2025-10-14 13:17:13.802158203 +0000 UTC m=+150.650593012" Oct 14 13:17:13 crc kubenswrapper[4725]: I1014 13:17:13.843849 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:13 crc kubenswrapper[4725]: E1014 13:17:13.845495 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:14.345438947 +0000 UTC m=+151.193873946 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:13 crc kubenswrapper[4725]: I1014 13:17:13.946277 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:13 crc kubenswrapper[4725]: E1014 13:17:13.946822 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:14.446803656 +0000 UTC m=+151.295238465 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:14 crc kubenswrapper[4725]: I1014 13:17:14.047162 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:14 crc kubenswrapper[4725]: E1014 13:17:14.047361 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:14.547314772 +0000 UTC m=+151.395749601 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:14 crc kubenswrapper[4725]: I1014 13:17:14.047523 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:17:14 crc kubenswrapper[4725]: I1014 13:17:14.047567 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:17:14 crc kubenswrapper[4725]: I1014 13:17:14.047622 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:17:14 crc kubenswrapper[4725]: I1014 13:17:14.047640 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:17:14 crc kubenswrapper[4725]: I1014 13:17:14.047678 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:14 crc kubenswrapper[4725]: E1014 13:17:14.048041 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:14.548025821 +0000 UTC m=+151.396460630 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:14 crc kubenswrapper[4725]: I1014 13:17:14.049727 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:17:14 crc kubenswrapper[4725]: I1014 13:17:14.080822 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:17:14 crc kubenswrapper[4725]: I1014 13:17:14.080849 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:17:14 crc kubenswrapper[4725]: I1014 13:17:14.081313 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 14 13:17:14 crc kubenswrapper[4725]: I1014 13:17:14.082473 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:17:14 crc kubenswrapper[4725]: I1014 13:17:14.149193 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:14 crc kubenswrapper[4725]: E1014 13:17:14.150276 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:14.650247853 +0000 UTC m=+151.498682672 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:14 crc kubenswrapper[4725]: I1014 13:17:14.252499 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:14 crc kubenswrapper[4725]: E1014 13:17:14.253225 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:14.753202687 +0000 UTC m=+151.601637506 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:14 crc kubenswrapper[4725]: I1014 13:17:14.350381 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 14 13:17:14 crc kubenswrapper[4725]: I1014 13:17:14.357268 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:14 crc kubenswrapper[4725]: E1014 13:17:14.357908 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:14.857881148 +0000 UTC m=+151.706315957 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:14 crc kubenswrapper[4725]: I1014 13:17:14.361744 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:17:14 crc kubenswrapper[4725]: I1014 13:17:14.459492 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:14 crc kubenswrapper[4725]: E1014 13:17:14.460002 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:14.959980487 +0000 UTC m=+151.808415296 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:14 crc kubenswrapper[4725]: I1014 13:17:14.561275 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:14 crc kubenswrapper[4725]: E1014 13:17:14.561917 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:15.061891731 +0000 UTC m=+151.910326540 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:14 crc kubenswrapper[4725]: I1014 13:17:14.670671 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:14 crc kubenswrapper[4725]: E1014 13:17:14.671628 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:15.171609993 +0000 UTC m=+152.020044802 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:14 crc kubenswrapper[4725]: I1014 13:17:14.728692 4725 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-ftgzg container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 14 13:17:14 crc kubenswrapper[4725]: I1014 13:17:14.728760 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftgzg" podUID="490a96f8-3a20-414a-b664-c2df9a8d373f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.21:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Oct 14 13:17:14 crc kubenswrapper[4725]: I1014 13:17:14.773678 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:14 crc kubenswrapper[4725]: I1014 13:17:14.773747 4725 patch_prober.go:28] interesting pod/router-default-5444994796-twd2b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:17:14 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Oct 14 13:17:14 crc kubenswrapper[4725]: [+]process-running ok Oct 14 13:17:14 crc kubenswrapper[4725]: healthz check failed Oct 14 13:17:14 crc kubenswrapper[4725]: I1014 13:17:14.773815 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-twd2b" podUID="23ffd8bd-e6ca-4824-900e-08ba0cd80041" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:17:14 crc kubenswrapper[4725]: E1014 13:17:14.774251 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:15.274221426 +0000 UTC m=+152.122656235 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:14 crc kubenswrapper[4725]: I1014 13:17:14.785295 4725 generic.go:334] "Generic (PLEG): container finished" podID="9d4f5074-61bb-4e5b-91ce-d8149ddb50f1" containerID="181f5706bdea2990a91001901bbec7f8d3461b974f9ffee1adfde16ddb54a858" exitCode=0 Oct 14 13:17:14 crc kubenswrapper[4725]: I1014 13:17:14.786709 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340795-6q4fq" event={"ID":"9d4f5074-61bb-4e5b-91ce-d8149ddb50f1","Type":"ContainerDied","Data":"181f5706bdea2990a91001901bbec7f8d3461b974f9ffee1adfde16ddb54a858"} Oct 14 13:17:14 crc kubenswrapper[4725]: I1014 13:17:14.875393 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:14 crc kubenswrapper[4725]: E1014 13:17:14.876040 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:15.376019378 +0000 UTC m=+152.224454187 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:14 crc kubenswrapper[4725]: I1014 13:17:14.962490 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6fsmm"] Oct 14 13:17:14 crc kubenswrapper[4725]: I1014 13:17:14.963965 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6fsmm" Oct 14 13:17:14 crc kubenswrapper[4725]: I1014 13:17:14.981006 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:14 crc kubenswrapper[4725]: E1014 13:17:14.982962 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:15.482934421 +0000 UTC m=+152.331369230 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.001749 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.024268 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6fsmm"] Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.087373 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57b7ca76-c75e-44f5-b428-6233a92bca51-catalog-content\") pod \"certified-operators-6fsmm\" (UID: \"57b7ca76-c75e-44f5-b428-6233a92bca51\") " pod="openshift-marketplace/certified-operators-6fsmm" Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.087433 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57b7ca76-c75e-44f5-b428-6233a92bca51-utilities\") pod \"certified-operators-6fsmm\" (UID: \"57b7ca76-c75e-44f5-b428-6233a92bca51\") " pod="openshift-marketplace/certified-operators-6fsmm" Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.087504 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnxlj\" (UniqueName: \"kubernetes.io/projected/57b7ca76-c75e-44f5-b428-6233a92bca51-kube-api-access-jnxlj\") pod \"certified-operators-6fsmm\" (UID: \"57b7ca76-c75e-44f5-b428-6233a92bca51\") " pod="openshift-marketplace/certified-operators-6fsmm" Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.087567 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:15 crc kubenswrapper[4725]: E1014 13:17:15.087943 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:15.58792824 +0000 UTC m=+152.436363039 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.176840 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fwjbz"] Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.178000 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fwjbz" Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.189273 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.190382 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.190507 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57b7ca76-c75e-44f5-b428-6233a92bca51-utilities\") pod \"certified-operators-6fsmm\" (UID: \"57b7ca76-c75e-44f5-b428-6233a92bca51\") " pod="openshift-marketplace/certified-operators-6fsmm" Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.190529 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnxlj\" (UniqueName: \"kubernetes.io/projected/57b7ca76-c75e-44f5-b428-6233a92bca51-kube-api-access-jnxlj\") pod \"certified-operators-6fsmm\" (UID: \"57b7ca76-c75e-44f5-b428-6233a92bca51\") " pod="openshift-marketplace/certified-operators-6fsmm" Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.190556 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4c5e303-50a7-4c4f-835f-651e69aba358-catalog-content\") pod \"community-operators-fwjbz\" (UID: \"b4c5e303-50a7-4c4f-835f-651e69aba358\") " pod="openshift-marketplace/community-operators-fwjbz" Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.190599 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4c5e303-50a7-4c4f-835f-651e69aba358-utilities\") pod \"community-operators-fwjbz\" (UID: \"b4c5e303-50a7-4c4f-835f-651e69aba358\") " pod="openshift-marketplace/community-operators-fwjbz" Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.190622 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcljq\" (UniqueName: \"kubernetes.io/projected/b4c5e303-50a7-4c4f-835f-651e69aba358-kube-api-access-vcljq\") pod \"community-operators-fwjbz\" (UID: \"b4c5e303-50a7-4c4f-835f-651e69aba358\") " pod="openshift-marketplace/community-operators-fwjbz" Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.190694 4725 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57b7ca76-c75e-44f5-b428-6233a92bca51-catalog-content\") pod \"certified-operators-6fsmm\" (UID: \"57b7ca76-c75e-44f5-b428-6233a92bca51\") " pod="openshift-marketplace/certified-operators-6fsmm" Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.191273 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57b7ca76-c75e-44f5-b428-6233a92bca51-catalog-content\") pod \"certified-operators-6fsmm\" (UID: \"57b7ca76-c75e-44f5-b428-6233a92bca51\") " pod="openshift-marketplace/certified-operators-6fsmm" Oct 14 13:17:15 crc kubenswrapper[4725]: E1014 13:17:15.191356 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:15.691338996 +0000 UTC m=+152.539773805 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.192238 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57b7ca76-c75e-44f5-b428-6233a92bca51-utilities\") pod \"certified-operators-6fsmm\" (UID: \"57b7ca76-c75e-44f5-b428-6233a92bca51\") " pod="openshift-marketplace/certified-operators-6fsmm" Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.258027 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnxlj\" (UniqueName: \"kubernetes.io/projected/57b7ca76-c75e-44f5-b428-6233a92bca51-kube-api-access-jnxlj\") pod \"certified-operators-6fsmm\" (UID: \"57b7ca76-c75e-44f5-b428-6233a92bca51\") " pod="openshift-marketplace/certified-operators-6fsmm" Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.273921 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fwjbz"] Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.292479 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4c5e303-50a7-4c4f-835f-651e69aba358-utilities\") pod \"community-operators-fwjbz\" (UID: \"b4c5e303-50a7-4c4f-835f-651e69aba358\") " pod="openshift-marketplace/community-operators-fwjbz" Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.292533 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcljq\" (UniqueName: \"kubernetes.io/projected/b4c5e303-50a7-4c4f-835f-651e69aba358-kube-api-access-vcljq\") pod \"community-operators-fwjbz\" (UID: \"b4c5e303-50a7-4c4f-835f-651e69aba358\") " pod="openshift-marketplace/community-operators-fwjbz" Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.292569 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.292639 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4c5e303-50a7-4c4f-835f-651e69aba358-catalog-content\") pod \"community-operators-fwjbz\" (UID: \"b4c5e303-50a7-4c4f-835f-651e69aba358\") " pod="openshift-marketplace/community-operators-fwjbz" Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.293175 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4c5e303-50a7-4c4f-835f-651e69aba358-catalog-content\") pod \"community-operators-fwjbz\" (UID: \"b4c5e303-50a7-4c4f-835f-651e69aba358\") " pod="openshift-marketplace/community-operators-fwjbz" Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.294435 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4c5e303-50a7-4c4f-835f-651e69aba358-utilities\") pod \"community-operators-fwjbz\" (UID: \"b4c5e303-50a7-4c4f-835f-651e69aba358\") " pod="openshift-marketplace/community-operators-fwjbz" Oct 14 13:17:15 crc kubenswrapper[4725]: E1014 13:17:15.300304 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:15.800283606 +0000 UTC m=+152.648718415 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.310320 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6fsmm" Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.391160 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcljq\" (UniqueName: \"kubernetes.io/projected/b4c5e303-50a7-4c4f-835f-651e69aba358-kube-api-access-vcljq\") pod \"community-operators-fwjbz\" (UID: \"b4c5e303-50a7-4c4f-835f-651e69aba358\") " pod="openshift-marketplace/community-operators-fwjbz" Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.394191 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:15 crc kubenswrapper[4725]: E1014 13:17:15.394616 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-14 13:17:15.894594209 +0000 UTC m=+152.743029018 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.394647 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lj4wj"] Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.395916 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lj4wj" Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.457548 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lj4wj"] Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.500631 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70da4b97-6b5f-4aae-93d3-ef2593f042c0-catalog-content\") pod \"certified-operators-lj4wj\" (UID: \"70da4b97-6b5f-4aae-93d3-ef2593f042c0\") " pod="openshift-marketplace/certified-operators-lj4wj" Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.500757 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.500781 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g5cs\" (UniqueName: \"kubernetes.io/projected/70da4b97-6b5f-4aae-93d3-ef2593f042c0-kube-api-access-5g5cs\") pod \"certified-operators-lj4wj\" (UID: \"70da4b97-6b5f-4aae-93d3-ef2593f042c0\") " pod="openshift-marketplace/certified-operators-lj4wj" Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.500813 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70da4b97-6b5f-4aae-93d3-ef2593f042c0-utilities\") pod \"certified-operators-lj4wj\" (UID: \"70da4b97-6b5f-4aae-93d3-ef2593f042c0\") " pod="openshift-marketplace/certified-operators-lj4wj" Oct 14 13:17:15 crc kubenswrapper[4725]: E1014 13:17:15.501204 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:16.001189683 +0000 UTC m=+152.849624492 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.523200 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fwjbz" Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.600600 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xs2wz"] Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.602044 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.602439 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g5cs\" (UniqueName: \"kubernetes.io/projected/70da4b97-6b5f-4aae-93d3-ef2593f042c0-kube-api-access-5g5cs\") pod \"certified-operators-lj4wj\" (UID: \"70da4b97-6b5f-4aae-93d3-ef2593f042c0\") " pod="openshift-marketplace/certified-operators-lj4wj" Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.602627 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70da4b97-6b5f-4aae-93d3-ef2593f042c0-utilities\") pod \"certified-operators-lj4wj\" (UID: \"70da4b97-6b5f-4aae-93d3-ef2593f042c0\") " pod="openshift-marketplace/certified-operators-lj4wj" Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.602895 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70da4b97-6b5f-4aae-93d3-ef2593f042c0-utilities\") pod \"certified-operators-lj4wj\" (UID: \"70da4b97-6b5f-4aae-93d3-ef2593f042c0\") " pod="openshift-marketplace/certified-operators-lj4wj" Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.602985 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70da4b97-6b5f-4aae-93d3-ef2593f042c0-catalog-content\") pod \"certified-operators-lj4wj\" (UID: \"70da4b97-6b5f-4aae-93d3-ef2593f042c0\") " pod="openshift-marketplace/certified-operators-lj4wj" Oct 14 13:17:15 crc kubenswrapper[4725]: E1014 13:17:15.603078 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:16.103052106 +0000 UTC m=+152.951486915 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.603361 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70da4b97-6b5f-4aae-93d3-ef2593f042c0-catalog-content\") pod \"certified-operators-lj4wj\" (UID: \"70da4b97-6b5f-4aae-93d3-ef2593f042c0\") " pod="openshift-marketplace/certified-operators-lj4wj" Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.605569 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xs2wz" Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.681963 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xs2wz"] Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.705496 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4706b0b-daed-451d-814b-bd2aa92c3c11-utilities\") pod \"community-operators-xs2wz\" (UID: \"a4706b0b-daed-451d-814b-bd2aa92c3c11\") " pod="openshift-marketplace/community-operators-xs2wz" Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.705582 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4706b0b-daed-451d-814b-bd2aa92c3c11-catalog-content\") pod \"community-operators-xs2wz\" (UID: \"a4706b0b-daed-451d-814b-bd2aa92c3c11\") " pod="openshift-marketplace/community-operators-xs2wz" Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.705605 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbwz9\" (UniqueName: \"kubernetes.io/projected/a4706b0b-daed-451d-814b-bd2aa92c3c11-kube-api-access-gbwz9\") pod \"community-operators-xs2wz\" (UID: \"a4706b0b-daed-451d-814b-bd2aa92c3c11\") " pod="openshift-marketplace/community-operators-xs2wz" Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.705632 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:15 crc kubenswrapper[4725]: E1014 13:17:15.705998 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:16.205984099 +0000 UTC m=+153.054418908 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.711017 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g5cs\" (UniqueName: \"kubernetes.io/projected/70da4b97-6b5f-4aae-93d3-ef2593f042c0-kube-api-access-5g5cs\") pod \"certified-operators-lj4wj\" (UID: \"70da4b97-6b5f-4aae-93d3-ef2593f042c0\") " pod="openshift-marketplace/certified-operators-lj4wj" Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.781407 4725 patch_prober.go:28] interesting pod/router-default-5444994796-twd2b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:17:15 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Oct 14 13:17:15 crc kubenswrapper[4725]: [+]process-running ok Oct 14 13:17:15 crc kubenswrapper[4725]: healthz check failed Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.781488 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-twd2b" podUID="23ffd8bd-e6ca-4824-900e-08ba0cd80041" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.807199 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.807527 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4706b0b-daed-451d-814b-bd2aa92c3c11-catalog-content\") pod \"community-operators-xs2wz\" (UID: \"a4706b0b-daed-451d-814b-bd2aa92c3c11\") " pod="openshift-marketplace/community-operators-xs2wz" Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.807560 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbwz9\" (UniqueName: \"kubernetes.io/projected/a4706b0b-daed-451d-814b-bd2aa92c3c11-kube-api-access-gbwz9\") pod \"community-operators-xs2wz\" (UID: \"a4706b0b-daed-451d-814b-bd2aa92c3c11\") " pod="openshift-marketplace/community-operators-xs2wz" Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.807637 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4706b0b-daed-451d-814b-bd2aa92c3c11-utilities\") pod \"community-operators-xs2wz\" (UID: \"a4706b0b-daed-451d-814b-bd2aa92c3c11\") " pod="openshift-marketplace/community-operators-xs2wz" Oct 14 13:17:15 crc kubenswrapper[4725]: E1014 13:17:15.811481 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2025-10-14 13:17:16.31142656 +0000 UTC m=+153.159861369 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.824599 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lj4wj" Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.825752 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4706b0b-daed-451d-814b-bd2aa92c3c11-catalog-content\") pod \"community-operators-xs2wz\" (UID: \"a4706b0b-daed-451d-814b-bd2aa92c3c11\") " pod="openshift-marketplace/community-operators-xs2wz" Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.825839 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"e623abcc2d6e25aa3a0ffa16635f9ba9a13f84bc24bce6cf8e31f37d59014414"} Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.834074 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4706b0b-daed-451d-814b-bd2aa92c3c11-utilities\") pod \"community-operators-xs2wz\" (UID: \"a4706b0b-daed-451d-814b-bd2aa92c3c11\") " pod="openshift-marketplace/community-operators-xs2wz" Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.864424 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbwz9\" (UniqueName: \"kubernetes.io/projected/a4706b0b-daed-451d-814b-bd2aa92c3c11-kube-api-access-gbwz9\") pod \"community-operators-xs2wz\" (UID: \"a4706b0b-daed-451d-814b-bd2aa92c3c11\") " pod="openshift-marketplace/community-operators-xs2wz" Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.875059 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"35e08d9ff54795d88bea1b819775dd19f1620ed784de02a6ececf83b107badb3"} Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.875151 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"d5bd58ebbb78defa61097889bf12f5b1452cbf747d92a0f3ccf33642e8f4f839"} Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.909406 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:15 crc kubenswrapper[4725]: E1014 13:17:15.910039 4725 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:16.410022263 +0000 UTC m=+153.258457072 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.914609 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"74306c52766f3919284db31f5007bf713114d98d24bd46430430138555e0b5e2"} Oct 14 13:17:15 crc kubenswrapper[4725]: I1014 13:17:15.923896 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xs2wz" Oct 14 13:17:16 crc kubenswrapper[4725]: I1014 13:17:16.019244 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:16 crc kubenswrapper[4725]: E1014 13:17:16.020962 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:16.520930527 +0000 UTC m=+153.369365336 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:16 crc kubenswrapper[4725]: I1014 13:17:16.099427 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6fsmm"] Oct 14 13:17:16 crc kubenswrapper[4725]: I1014 13:17:16.103055 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-k8lml" Oct 14 13:17:16 crc kubenswrapper[4725]: I1014 13:17:16.123579 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:16 crc kubenswrapper[4725]: E1014 13:17:16.124082 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:16.624060155 +0000 UTC m=+153.472494984 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:16 crc kubenswrapper[4725]: I1014 13:17:16.226611 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:16 crc kubenswrapper[4725]: E1014 13:17:16.227134 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:16.72707056 +0000 UTC m=+153.575505379 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:16 crc kubenswrapper[4725]: I1014 13:17:16.330844 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:16 crc kubenswrapper[4725]: E1014 13:17:16.331272 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:16.831257687 +0000 UTC m=+153.679692486 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:16 crc kubenswrapper[4725]: I1014 13:17:16.349034 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 14 13:17:16 crc kubenswrapper[4725]: I1014 13:17:16.350206 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 14 13:17:16 crc kubenswrapper[4725]: I1014 13:17:16.387350 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Oct 14 13:17:16 crc kubenswrapper[4725]: I1014 13:17:16.413292 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 14 13:17:16 crc kubenswrapper[4725]: I1014 13:17:16.427485 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Oct 14 13:17:16 crc kubenswrapper[4725]: I1014 13:17:16.453162 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:16 crc kubenswrapper[4725]: E1014 13:17:16.454777 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:16.954732441 +0000 UTC m=+153.803167240 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:16 crc kubenswrapper[4725]: I1014 13:17:16.462470 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b02b86e-fa0a-4465-9613-44ab7d201daf-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8b02b86e-fa0a-4465-9613-44ab7d201daf\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 14 13:17:16 crc kubenswrapper[4725]: I1014 13:17:16.462642 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:16 crc kubenswrapper[4725]: I1014 13:17:16.462889 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b02b86e-fa0a-4465-9613-44ab7d201daf-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8b02b86e-fa0a-4465-9613-44ab7d201daf\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 14 13:17:16 crc kubenswrapper[4725]: E1014 13:17:16.464200 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:16.964119531 +0000 UTC m=+153.812554340 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:16 crc kubenswrapper[4725]: I1014 13:17:16.568387 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:16 crc kubenswrapper[4725]: I1014 13:17:16.569007 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b02b86e-fa0a-4465-9613-44ab7d201daf-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8b02b86e-fa0a-4465-9613-44ab7d201daf\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 14 13:17:16 crc kubenswrapper[4725]: I1014 13:17:16.569167 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b02b86e-fa0a-4465-9613-44ab7d201daf-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8b02b86e-fa0a-4465-9613-44ab7d201daf\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 14 13:17:16 crc kubenswrapper[4725]: I1014 13:17:16.569381 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b02b86e-fa0a-4465-9613-44ab7d201daf-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8b02b86e-fa0a-4465-9613-44ab7d201daf\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 14 13:17:16 crc kubenswrapper[4725]: E1014 13:17:16.569393 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:17.069353438 +0000 UTC m=+153.917788247 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:16 crc kubenswrapper[4725]: I1014 13:17:16.608256 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b02b86e-fa0a-4465-9613-44ab7d201daf-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8b02b86e-fa0a-4465-9613-44ab7d201daf\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 14 13:17:16 crc kubenswrapper[4725]: I1014 13:17:16.674378 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:16 crc kubenswrapper[4725]: E1014 13:17:16.674868 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:17.174852133 +0000 UTC m=+154.023286942 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:16 crc kubenswrapper[4725]: I1014 13:17:16.741480 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 14 13:17:16 crc kubenswrapper[4725]: I1014 13:17:16.769611 4725 patch_prober.go:28] interesting pod/router-default-5444994796-twd2b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:17:16 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Oct 14 13:17:16 crc kubenswrapper[4725]: [+]process-running ok Oct 14 13:17:16 crc kubenswrapper[4725]: healthz check failed Oct 14 13:17:16 crc kubenswrapper[4725]: I1014 13:17:16.769671 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-twd2b" podUID="23ffd8bd-e6ca-4824-900e-08ba0cd80041" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:17:16 crc kubenswrapper[4725]: I1014 13:17:16.776305 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:16 crc kubenswrapper[4725]: E1014 13:17:16.776809 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:17.276784917 +0000 UTC m=+154.125219726 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:16 crc kubenswrapper[4725]: I1014 13:17:16.815262 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xs2wz"] Oct 14 13:17:16 crc kubenswrapper[4725]: I1014 13:17:16.852664 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340795-6q4fq" Oct 14 13:17:16 crc kubenswrapper[4725]: I1014 13:17:16.882006 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:16 crc kubenswrapper[4725]: E1014 13:17:16.882569 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:17.382551238 +0000 UTC m=+154.230986047 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:16 crc kubenswrapper[4725]: I1014 13:17:16.967416 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g9pj7"] Oct 14 13:17:16 crc kubenswrapper[4725]: E1014 13:17:16.967728 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d4f5074-61bb-4e5b-91ce-d8149ddb50f1" containerName="collect-profiles" Oct 14 13:17:16 crc kubenswrapper[4725]: I1014 13:17:16.967742 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d4f5074-61bb-4e5b-91ce-d8149ddb50f1" containerName="collect-profiles" Oct 14 13:17:16 crc kubenswrapper[4725]: I1014 13:17:16.967894 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d4f5074-61bb-4e5b-91ce-d8149ddb50f1" containerName="collect-profiles" Oct 14 13:17:16 crc kubenswrapper[4725]: I1014 13:17:16.968764 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g9pj7" Oct 14 13:17:16 crc kubenswrapper[4725]: I1014 13:17:16.978376 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Oct 14 13:17:16 crc kubenswrapper[4725]: I1014 13:17:16.984089 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9d4f5074-61bb-4e5b-91ce-d8149ddb50f1-secret-volume\") pod \"9d4f5074-61bb-4e5b-91ce-d8149ddb50f1\" (UID: \"9d4f5074-61bb-4e5b-91ce-d8149ddb50f1\") " Oct 14 13:17:16 crc kubenswrapper[4725]: I1014 13:17:16.984225 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:16 crc kubenswrapper[4725]: I1014 13:17:16.984289 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfqsb\" (UniqueName: \"kubernetes.io/projected/9d4f5074-61bb-4e5b-91ce-d8149ddb50f1-kube-api-access-wfqsb\") pod \"9d4f5074-61bb-4e5b-91ce-d8149ddb50f1\" (UID: \"9d4f5074-61bb-4e5b-91ce-d8149ddb50f1\") " Oct 14 13:17:16 crc kubenswrapper[4725]: I1014 13:17:16.984390 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d4f5074-61bb-4e5b-91ce-d8149ddb50f1-config-volume\") pod \"9d4f5074-61bb-4e5b-91ce-d8149ddb50f1\" (UID: \"9d4f5074-61bb-4e5b-91ce-d8149ddb50f1\") " Oct 14 13:17:16 crc kubenswrapper[4725]: I1014 13:17:16.985645 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4f5074-61bb-4e5b-91ce-d8149ddb50f1-config-volume" (OuterVolumeSpecName: "config-volume") pod "9d4f5074-61bb-4e5b-91ce-d8149ddb50f1" (UID: "9d4f5074-61bb-4e5b-91ce-d8149ddb50f1"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:17:16 crc kubenswrapper[4725]: E1014 13:17:16.988506 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:17.488439853 +0000 UTC m=+154.336874652 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.015324 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4f5074-61bb-4e5b-91ce-d8149ddb50f1-kube-api-access-wfqsb" (OuterVolumeSpecName: "kube-api-access-wfqsb") pod "9d4f5074-61bb-4e5b-91ce-d8149ddb50f1" (UID: "9d4f5074-61bb-4e5b-91ce-d8149ddb50f1"). InnerVolumeSpecName "kube-api-access-wfqsb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.042089 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4f5074-61bb-4e5b-91ce-d8149ddb50f1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9d4f5074-61bb-4e5b-91ce-d8149ddb50f1" (UID: "9d4f5074-61bb-4e5b-91ce-d8149ddb50f1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.050843 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g9pj7"] Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.069354 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fwjbz"] Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.073008 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340795-6q4fq" event={"ID":"9d4f5074-61bb-4e5b-91ce-d8149ddb50f1","Type":"ContainerDied","Data":"152b0b9b75c05f0726ec8e3fed0a3e6f2484cba7f78776d0b5835e3cec9d099a"} Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.073078 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="152b0b9b75c05f0726ec8e3fed0a3e6f2484cba7f78776d0b5835e3cec9d099a" Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.073181 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340795-6q4fq" Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.085693 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15dbec8a-0d83-438c-b58b-3eb5aafa1f95-utilities\") pod \"redhat-marketplace-g9pj7\" (UID: \"15dbec8a-0d83-438c-b58b-3eb5aafa1f95\") " pod="openshift-marketplace/redhat-marketplace-g9pj7" Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.085799 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.085828 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15dbec8a-0d83-438c-b58b-3eb5aafa1f95-catalog-content\") pod \"redhat-marketplace-g9pj7\" (UID: \"15dbec8a-0d83-438c-b58b-3eb5aafa1f95\") " pod="openshift-marketplace/redhat-marketplace-g9pj7" Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.085875 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptcqc\" (UniqueName: \"kubernetes.io/projected/15dbec8a-0d83-438c-b58b-3eb5aafa1f95-kube-api-access-ptcqc\") pod \"redhat-marketplace-g9pj7\" (UID: \"15dbec8a-0d83-438c-b58b-3eb5aafa1f95\") " pod="openshift-marketplace/redhat-marketplace-g9pj7" Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.085929 4725 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d4f5074-61bb-4e5b-91ce-d8149ddb50f1-config-volume\") on node \"crc\" DevicePath \"\"" Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.085943 4725 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9d4f5074-61bb-4e5b-91ce-d8149ddb50f1-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.085956 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfqsb\" (UniqueName: \"kubernetes.io/projected/9d4f5074-61bb-4e5b-91ce-d8149ddb50f1-kube-api-access-wfqsb\") on node \"crc\" DevicePath \"\"" Oct 14 13:17:17 crc kubenswrapper[4725]: E1014 13:17:17.086383 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:17.586362207 +0000 UTC m=+154.434797016 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.093777 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lj4wj"] Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.101527 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xs2wz" event={"ID":"a4706b0b-daed-451d-814b-bd2aa92c3c11","Type":"ContainerStarted","Data":"3f6ac7b5f8478762cecf6319b5bc0825a6e13741a67a8460bde76526fad402c5"} Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.178851 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"12707dffac01c5936bb15928ade844b11f1a2b0c39112cc7900a51d11b705dee"} Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.187500 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:17 crc kubenswrapper[4725]: E1014 13:17:17.187672 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:17.687636253 +0000 UTC m=+154.536071062 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.187859 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptcqc\" (UniqueName: \"kubernetes.io/projected/15dbec8a-0d83-438c-b58b-3eb5aafa1f95-kube-api-access-ptcqc\") pod \"redhat-marketplace-g9pj7\" (UID: \"15dbec8a-0d83-438c-b58b-3eb5aafa1f95\") " pod="openshift-marketplace/redhat-marketplace-g9pj7" Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.187907 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15dbec8a-0d83-438c-b58b-3eb5aafa1f95-utilities\") pod \"redhat-marketplace-g9pj7\" (UID: \"15dbec8a-0d83-438c-b58b-3eb5aafa1f95\") " pod="openshift-marketplace/redhat-marketplace-g9pj7" Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.187974 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.188002 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15dbec8a-0d83-438c-b58b-3eb5aafa1f95-catalog-content\") pod \"redhat-marketplace-g9pj7\" (UID: \"15dbec8a-0d83-438c-b58b-3eb5aafa1f95\") " pod="openshift-marketplace/redhat-marketplace-g9pj7" Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.188545 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15dbec8a-0d83-438c-b58b-3eb5aafa1f95-catalog-content\") pod \"redhat-marketplace-g9pj7\" (UID: \"15dbec8a-0d83-438c-b58b-3eb5aafa1f95\") " pod="openshift-marketplace/redhat-marketplace-g9pj7" Oct 14 13:17:17 crc kubenswrapper[4725]: E1014 13:17:17.189538 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:17.689526815 +0000 UTC m=+154.537961614 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.189665 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15dbec8a-0d83-438c-b58b-3eb5aafa1f95-utilities\") pod \"redhat-marketplace-g9pj7\" (UID: \"15dbec8a-0d83-438c-b58b-3eb5aafa1f95\") " pod="openshift-marketplace/redhat-marketplace-g9pj7" Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.215844 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptcqc\" (UniqueName: \"kubernetes.io/projected/15dbec8a-0d83-438c-b58b-3eb5aafa1f95-kube-api-access-ptcqc\") pod \"redhat-marketplace-g9pj7\" (UID: \"15dbec8a-0d83-438c-b58b-3eb5aafa1f95\") " pod="openshift-marketplace/redhat-marketplace-g9pj7" Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.216159 4725 generic.go:334] "Generic (PLEG): container finished" podID="57b7ca76-c75e-44f5-b428-6233a92bca51" containerID="b58894bbab9a1204c8a4f2e9ead2b2379312196bf39cc462d135fbd887985a64" exitCode=0 Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.216301 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6fsmm" event={"ID":"57b7ca76-c75e-44f5-b428-6233a92bca51","Type":"ContainerDied","Data":"b58894bbab9a1204c8a4f2e9ead2b2379312196bf39cc462d135fbd887985a64"} Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.216335 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6fsmm" event={"ID":"57b7ca76-c75e-44f5-b428-6233a92bca51","Type":"ContainerStarted","Data":"15ca8843bf2221923b2a8edcc7a1bf085d099a2fe09cd34c97245e2e2084e561"} Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.218265 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zfnkd" Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.219063 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zfnkd" Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.219934 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"6951608e141fa8a46a6aadb126ae8b66f6f1bf90f4430ee11497d47c84c26a76"} Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.220489 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.227118 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zfnkd" Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.235836 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jnvzr" 
event={"ID":"b372f112-486d-4848-a78f-552a485abacc","Type":"ContainerStarted","Data":"e3780b3c2de2491c4813f5dd686b3dbf229bdf6a139f5dc019cd8de0b0bed74b"} Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.288865 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:17 crc kubenswrapper[4725]: E1014 13:17:17.289348 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:17.789287109 +0000 UTC m=+154.637721918 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.290048 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.292496 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:17 crc kubenswrapper[4725]: E1014 13:17:17.293032 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:17.793009423 +0000 UTC m=+154.641444232 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.327835 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nll7b"] Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.329118 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nll7b" Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.331581 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g9pj7" Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.345874 4725 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.353742 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nll7b"] Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.366642 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-dbshk" Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.367182 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-dbshk" Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.369334 4725 patch_prober.go:28] interesting pod/console-f9d7485db-dbshk container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.18:8443/health\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.369622 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-dbshk" podUID="9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9" containerName="console" probeResult="failure" output="Get \"https://10.217.0.18:8443/health\": dial tcp 10.217.0.18:8443: connect: connection refused" Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.396618 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:17 crc kubenswrapper[4725]: E1014 13:17:17.399431 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:17.899397981 +0000 UTC m=+154.747832990 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.474674 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-4v6wb container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.474753 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-4v6wb" podUID="3fd3f30a-c1ac-4a8f-8d95-9d4165d608a2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.475207 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-4v6wb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.475226 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4v6wb" podUID="3fd3f30a-c1ac-4a8f-8d95-9d4165d608a2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.499251 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.499410 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qhmq\" (UniqueName: \"kubernetes.io/projected/72e152ab-fabb-4551-b0f9-7c520824edef-kube-api-access-6qhmq\") pod \"redhat-marketplace-nll7b\" (UID: \"72e152ab-fabb-4551-b0f9-7c520824edef\") " pod="openshift-marketplace/redhat-marketplace-nll7b" Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.499451 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72e152ab-fabb-4551-b0f9-7c520824edef-utilities\") pod \"redhat-marketplace-nll7b\" (UID: \"72e152ab-fabb-4551-b0f9-7c520824edef\") " pod="openshift-marketplace/redhat-marketplace-nll7b" Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.499591 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72e152ab-fabb-4551-b0f9-7c520824edef-catalog-content\") pod \"redhat-marketplace-nll7b\" (UID: \"72e152ab-fabb-4551-b0f9-7c520824edef\") " pod="openshift-marketplace/redhat-marketplace-nll7b" Oct 14 13:17:17 
crc kubenswrapper[4725]: E1014 13:17:17.500700 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:18.000687078 +0000 UTC m=+154.849121887 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.508338 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-p8t6g" Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.508406 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-p8t6g" Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.520495 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.523796 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-hb586" Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.550048 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ftgzg" Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.600799 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.601174 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qhmq\" (UniqueName: \"kubernetes.io/projected/72e152ab-fabb-4551-b0f9-7c520824edef-kube-api-access-6qhmq\") pod \"redhat-marketplace-nll7b\" (UID: \"72e152ab-fabb-4551-b0f9-7c520824edef\") " pod="openshift-marketplace/redhat-marketplace-nll7b" Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.601223 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72e152ab-fabb-4551-b0f9-7c520824edef-utilities\") pod \"redhat-marketplace-nll7b\" (UID: \"72e152ab-fabb-4551-b0f9-7c520824edef\") " pod="openshift-marketplace/redhat-marketplace-nll7b" Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.601265 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72e152ab-fabb-4551-b0f9-7c520824edef-catalog-content\") pod \"redhat-marketplace-nll7b\" (UID: \"72e152ab-fabb-4551-b0f9-7c520824edef\") " pod="openshift-marketplace/redhat-marketplace-nll7b" Oct 14 13:17:17 crc kubenswrapper[4725]: E1014 13:17:17.601689 4725 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:18.101642165 +0000 UTC m=+154.950077114 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.602021 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72e152ab-fabb-4551-b0f9-7c520824edef-catalog-content\") pod \"redhat-marketplace-nll7b\" (UID: \"72e152ab-fabb-4551-b0f9-7c520824edef\") " pod="openshift-marketplace/redhat-marketplace-nll7b" Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.602106 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72e152ab-fabb-4551-b0f9-7c520824edef-utilities\") pod \"redhat-marketplace-nll7b\" (UID: \"72e152ab-fabb-4551-b0f9-7c520824edef\") " pod="openshift-marketplace/redhat-marketplace-nll7b" Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.641052 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qhmq\" (UniqueName: \"kubernetes.io/projected/72e152ab-fabb-4551-b0f9-7c520824edef-kube-api-access-6qhmq\") pod \"redhat-marketplace-nll7b\" (UID: \"72e152ab-fabb-4551-b0f9-7c520824edef\") " pod="openshift-marketplace/redhat-marketplace-nll7b" Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.680488 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g9pj7"] Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.705552 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:17 crc kubenswrapper[4725]: E1014 13:17:17.706053 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:18.206026369 +0000 UTC m=+155.054461178 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.762486 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-twd2b" Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.772406 4725 patch_prober.go:28] interesting pod/router-default-5444994796-twd2b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:17:17 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Oct 14 13:17:17 crc kubenswrapper[4725]: [+]process-running ok Oct 14 13:17:17 crc kubenswrapper[4725]: healthz check failed Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.772505 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-twd2b" podUID="23ffd8bd-e6ca-4824-900e-08ba0cd80041" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.808297 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:17 crc kubenswrapper[4725]: E1014 13:17:17.808543 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:18.308507908 +0000 UTC m=+155.156942717 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.808803 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:17 crc kubenswrapper[4725]: E1014 13:17:17.809237 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:18.309219199 +0000 UTC m=+155.157654008 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.910385 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:17 crc kubenswrapper[4725]: E1014 13:17:17.910595 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:18.410565807 +0000 UTC m=+155.259000616 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.910843 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:17 crc kubenswrapper[4725]: E1014 13:17:17.911230 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:18.411212815 +0000 UTC m=+155.259647624 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:17 crc kubenswrapper[4725]: I1014 13:17:17.932006 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nll7b" Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.012275 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:18 crc kubenswrapper[4725]: E1014 13:17:18.012695 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:18.512652936 +0000 UTC m=+155.361087755 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.013010 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:18 crc kubenswrapper[4725]: E1014 13:17:18.013695 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-14 13:17:18.513680655 +0000 UTC m=+155.362115464 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96hd4" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.114409 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:18 crc kubenswrapper[4725]: E1014 13:17:18.114979 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-14 13:17:18.614954151 +0000 UTC m=+155.463388960 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.125770 4725 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-10-14T13:17:17.345908124Z","Handler":null,"Name":""} Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.133973 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nll7b"] Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.189885 4725 patch_prober.go:28] interesting pod/apiserver-76f77b778f-p8t6g container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 14 13:17:18 crc kubenswrapper[4725]: [+]log ok Oct 14 13:17:18 crc kubenswrapper[4725]: [+]etcd ok Oct 14 13:17:18 crc kubenswrapper[4725]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 14 13:17:18 crc kubenswrapper[4725]: [+]poststarthook/generic-apiserver-start-informers ok Oct 14 13:17:18 crc kubenswrapper[4725]: [+]poststarthook/max-in-flight-filter ok Oct 14 13:17:18 crc kubenswrapper[4725]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 14 13:17:18 crc kubenswrapper[4725]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 14 13:17:18 crc kubenswrapper[4725]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Oct 14 13:17:18 crc kubenswrapper[4725]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Oct 14 13:17:18 crc kubenswrapper[4725]: [+]poststarthook/project.openshift.io-projectcache ok Oct 14 13:17:18 crc kubenswrapper[4725]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 14 13:17:18 crc kubenswrapper[4725]: [-]poststarthook/openshift.io-startinformers failed: reason withheld Oct 14 13:17:18 crc kubenswrapper[4725]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 14 13:17:18 crc kubenswrapper[4725]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 14 13:17:18 crc kubenswrapper[4725]: livez check failed Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.189984 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-p8t6g" podUID="42497b95-3bd9-480d-9393-db14108c977e" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.206108 4725 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.206152 4725 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.216541 4725 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.220737 4725 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.220775 4725 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.242346 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nll7b" event={"ID":"72e152ab-fabb-4551-b0f9-7c520824edef","Type":"ContainerStarted","Data":"fdfc38ec0c6ed300772e83e962cf6cad97af68369fd63b021dbaa38d6c5ad12a"} Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.243871 4725 generic.go:334] "Generic (PLEG): container finished" podID="15dbec8a-0d83-438c-b58b-3eb5aafa1f95" containerID="51047a23ff444ea88e3470372bf44ae4252887a2bcb6908890c659b70358935d" exitCode=0 Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.243926 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g9pj7" event={"ID":"15dbec8a-0d83-438c-b58b-3eb5aafa1f95","Type":"ContainerDied","Data":"51047a23ff444ea88e3470372bf44ae4252887a2bcb6908890c659b70358935d"} Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.243944 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g9pj7" event={"ID":"15dbec8a-0d83-438c-b58b-3eb5aafa1f95","Type":"ContainerStarted","Data":"5af0b34ec3f59f2352e60a16b1089f51b5c08dadc923acb6902030370dcf8635"} Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.246291 4725 generic.go:334] "Generic (PLEG): container finished" podID="70da4b97-6b5f-4aae-93d3-ef2593f042c0" containerID="c2188edf3766945e5953b903299bbc7ee9ac4a6d7154e83f86dc1d375ee146f5" exitCode=0 Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.246338 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lj4wj" event={"ID":"70da4b97-6b5f-4aae-93d3-ef2593f042c0","Type":"ContainerDied","Data":"c2188edf3766945e5953b903299bbc7ee9ac4a6d7154e83f86dc1d375ee146f5"} Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.246358 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lj4wj" event={"ID":"70da4b97-6b5f-4aae-93d3-ef2593f042c0","Type":"ContainerStarted","Data":"e5b3c346467ed22b994f4505b2b1a8b3f92b0105ea9b0ab9089e7d2862aab638"} Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.248880 4725 generic.go:334] "Generic (PLEG): container finished" podID="a4706b0b-daed-451d-814b-bd2aa92c3c11" containerID="7e7b30e3cb5671a7cf31e88b4d412f4eb42019ab18c64e8a08ee7672f1e6c553" exitCode=0 Oct 14 13:17:18 crc 
kubenswrapper[4725]: I1014 13:17:18.248957 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xs2wz" event={"ID":"a4706b0b-daed-451d-814b-bd2aa92c3c11","Type":"ContainerDied","Data":"7e7b30e3cb5671a7cf31e88b4d412f4eb42019ab18c64e8a08ee7672f1e6c553"} Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.251530 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8b02b86e-fa0a-4465-9613-44ab7d201daf","Type":"ContainerStarted","Data":"3b19794d0b4ad067833968890623d9c7412249b9474cea1b7721168e40e582dd"} Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.253052 4725 generic.go:334] "Generic (PLEG): container finished" podID="b4c5e303-50a7-4c4f-835f-651e69aba358" containerID="938b37417736147a73ed125021fffb83138df821b3e9549f00a2c804682d6790" exitCode=0 Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.253136 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwjbz" event={"ID":"b4c5e303-50a7-4c4f-835f-651e69aba358","Type":"ContainerDied","Data":"938b37417736147a73ed125021fffb83138df821b3e9549f00a2c804682d6790"} Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.253171 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwjbz" event={"ID":"b4c5e303-50a7-4c4f-835f-651e69aba358","Type":"ContainerStarted","Data":"7ec7c8109ef331e8468734da5bc0c80907c3ba41440e3e7d20c03aa4d18d303e"} Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.258515 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jnvzr" event={"ID":"b372f112-486d-4848-a78f-552a485abacc","Type":"ContainerStarted","Data":"defb526abfc3f5186fa2c41ce9049904612b418d5708091575488bec0e4815da"} Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.258591 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jnvzr" event={"ID":"b372f112-486d-4848-a78f-552a485abacc","Type":"ContainerStarted","Data":"2247403fdc0fc3d270cc313f0e6b8b36f97f73ff14eca4cd0055d0438d47505a"} Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.273081 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zfnkd" Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.285002 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96hd4\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.317906 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.329109 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-jnvzr" podStartSLOduration=14.329088366 podStartE2EDuration="14.329088366s" podCreationTimestamp="2025-10-14 13:17:04 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:17:18.300940083 +0000 UTC m=+155.149374892" watchObservedRunningTime="2025-10-14 13:17:18.329088366 +0000 UTC m=+155.177523175" Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.332602 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dw67l"] Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.333773 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dw67l" Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.353585 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.362830 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dw67l"] Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.366894 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.397613 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.522105 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc7e8275-cf1b-4d13-9ba5-95f65a961049-catalog-content\") pod \"redhat-operators-dw67l\" (UID: \"cc7e8275-cf1b-4d13-9ba5-95f65a961049\") " pod="openshift-marketplace/redhat-operators-dw67l" Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.522799 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vvxh\" (UniqueName: \"kubernetes.io/projected/cc7e8275-cf1b-4d13-9ba5-95f65a961049-kube-api-access-6vvxh\") pod \"redhat-operators-dw67l\" (UID: \"cc7e8275-cf1b-4d13-9ba5-95f65a961049\") " pod="openshift-marketplace/redhat-operators-dw67l" Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.522836 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc7e8275-cf1b-4d13-9ba5-95f65a961049-utilities\") pod \"redhat-operators-dw67l\" (UID: \"cc7e8275-cf1b-4d13-9ba5-95f65a961049\") " pod="openshift-marketplace/redhat-operators-dw67l" Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.625833 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vvxh\" (UniqueName: \"kubernetes.io/projected/cc7e8275-cf1b-4d13-9ba5-95f65a961049-kube-api-access-6vvxh\") pod \"redhat-operators-dw67l\" (UID: \"cc7e8275-cf1b-4d13-9ba5-95f65a961049\") " pod="openshift-marketplace/redhat-operators-dw67l" Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.625899 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/cc7e8275-cf1b-4d13-9ba5-95f65a961049-utilities\") pod \"redhat-operators-dw67l\" (UID: \"cc7e8275-cf1b-4d13-9ba5-95f65a961049\") " pod="openshift-marketplace/redhat-operators-dw67l" Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.625950 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc7e8275-cf1b-4d13-9ba5-95f65a961049-catalog-content\") pod \"redhat-operators-dw67l\" (UID: \"cc7e8275-cf1b-4d13-9ba5-95f65a961049\") " pod="openshift-marketplace/redhat-operators-dw67l" Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.629769 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc7e8275-cf1b-4d13-9ba5-95f65a961049-catalog-content\") pod \"redhat-operators-dw67l\" (UID: \"cc7e8275-cf1b-4d13-9ba5-95f65a961049\") " pod="openshift-marketplace/redhat-operators-dw67l" Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.633613 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc7e8275-cf1b-4d13-9ba5-95f65a961049-utilities\") pod \"redhat-operators-dw67l\" (UID: \"cc7e8275-cf1b-4d13-9ba5-95f65a961049\") " pod="openshift-marketplace/redhat-operators-dw67l" Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.665183 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vvxh\" (UniqueName: \"kubernetes.io/projected/cc7e8275-cf1b-4d13-9ba5-95f65a961049-kube-api-access-6vvxh\") pod \"redhat-operators-dw67l\" (UID: \"cc7e8275-cf1b-4d13-9ba5-95f65a961049\") " pod="openshift-marketplace/redhat-operators-dw67l" Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.739064 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bkn7h"] Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.742825 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bkn7h" Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.756421 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bkn7h"] Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.769789 4725 patch_prober.go:28] interesting pod/router-default-5444994796-twd2b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:17:18 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Oct 14 13:17:18 crc kubenswrapper[4725]: [+]process-running ok Oct 14 13:17:18 crc kubenswrapper[4725]: healthz check failed Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.769844 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-twd2b" podUID="23ffd8bd-e6ca-4824-900e-08ba0cd80041" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.776004 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dw67l" Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.813373 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-96hd4"] Oct 14 13:17:18 crc kubenswrapper[4725]: W1014 13:17:18.848994 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7ec7d62_80ef_4c08_9927_a66bc3ee6cb3.slice/crio-310f5a209ae1d5ef1379ffa93113145d5368934df740c6190be135120d919648 WatchSource:0}: Error finding container 310f5a209ae1d5ef1379ffa93113145d5368934df740c6190be135120d919648: Status 404 returned error can't find the container with id 310f5a209ae1d5ef1379ffa93113145d5368934df740c6190be135120d919648 Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.932213 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds6lb\" (UniqueName: \"kubernetes.io/projected/04c32971-b976-46b3-b96c-2bcc703b4dd0-kube-api-access-ds6lb\") pod \"redhat-operators-bkn7h\" (UID: \"04c32971-b976-46b3-b96c-2bcc703b4dd0\") " pod="openshift-marketplace/redhat-operators-bkn7h" Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.932305 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04c32971-b976-46b3-b96c-2bcc703b4dd0-utilities\") pod \"redhat-operators-bkn7h\" (UID: \"04c32971-b976-46b3-b96c-2bcc703b4dd0\") " pod="openshift-marketplace/redhat-operators-bkn7h" Oct 14 13:17:18 crc kubenswrapper[4725]: I1014 13:17:18.932376 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04c32971-b976-46b3-b96c-2bcc703b4dd0-catalog-content\") pod \"redhat-operators-bkn7h\" (UID: \"04c32971-b976-46b3-b96c-2bcc703b4dd0\") " pod="openshift-marketplace/redhat-operators-bkn7h" Oct 14 13:17:19 crc kubenswrapper[4725]: I1014 13:17:19.033949 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04c32971-b976-46b3-b96c-2bcc703b4dd0-utilities\") pod \"redhat-operators-bkn7h\" (UID: \"04c32971-b976-46b3-b96c-2bcc703b4dd0\") " pod="openshift-marketplace/redhat-operators-bkn7h" Oct 14 13:17:19 crc kubenswrapper[4725]: I1014 13:17:19.034036 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04c32971-b976-46b3-b96c-2bcc703b4dd0-catalog-content\") pod \"redhat-operators-bkn7h\" (UID: \"04c32971-b976-46b3-b96c-2bcc703b4dd0\") " pod="openshift-marketplace/redhat-operators-bkn7h" Oct 14 13:17:19 crc kubenswrapper[4725]: I1014 13:17:19.034095 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds6lb\" (UniqueName: \"kubernetes.io/projected/04c32971-b976-46b3-b96c-2bcc703b4dd0-kube-api-access-ds6lb\") pod \"redhat-operators-bkn7h\" (UID: \"04c32971-b976-46b3-b96c-2bcc703b4dd0\") " pod="openshift-marketplace/redhat-operators-bkn7h" Oct 14 13:17:19 crc kubenswrapper[4725]: I1014 13:17:19.035130 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04c32971-b976-46b3-b96c-2bcc703b4dd0-utilities\") pod \"redhat-operators-bkn7h\" (UID: \"04c32971-b976-46b3-b96c-2bcc703b4dd0\") " 
pod="openshift-marketplace/redhat-operators-bkn7h" Oct 14 13:17:19 crc kubenswrapper[4725]: I1014 13:17:19.035192 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04c32971-b976-46b3-b96c-2bcc703b4dd0-catalog-content\") pod \"redhat-operators-bkn7h\" (UID: \"04c32971-b976-46b3-b96c-2bcc703b4dd0\") " pod="openshift-marketplace/redhat-operators-bkn7h" Oct 14 13:17:19 crc kubenswrapper[4725]: I1014 13:17:19.063062 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds6lb\" (UniqueName: \"kubernetes.io/projected/04c32971-b976-46b3-b96c-2bcc703b4dd0-kube-api-access-ds6lb\") pod \"redhat-operators-bkn7h\" (UID: \"04c32971-b976-46b3-b96c-2bcc703b4dd0\") " pod="openshift-marketplace/redhat-operators-bkn7h" Oct 14 13:17:19 crc kubenswrapper[4725]: I1014 13:17:19.079000 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dw67l"] Oct 14 13:17:19 crc kubenswrapper[4725]: I1014 13:17:19.268800 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dw67l" event={"ID":"cc7e8275-cf1b-4d13-9ba5-95f65a961049","Type":"ContainerStarted","Data":"081b9d8e0ec3341dc0e9edcd905af1442e183f37f81b377ed88b47aad160e832"} Oct 14 13:17:19 crc kubenswrapper[4725]: I1014 13:17:19.273646 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8b02b86e-fa0a-4465-9613-44ab7d201daf","Type":"ContainerStarted","Data":"897e85d39d394c279839bf9347e8b5ab7a7ea825bf1f4273b9f0590f8fc44111"} Oct 14 13:17:19 crc kubenswrapper[4725]: I1014 13:17:19.279501 4725 generic.go:334] "Generic (PLEG): container finished" podID="72e152ab-fabb-4551-b0f9-7c520824edef" containerID="d0cc1ad43b229aca9ad385df1dc473c715282ad188dded83376fd354ce0f50e6" exitCode=0 Oct 14 13:17:19 crc kubenswrapper[4725]: I1014 13:17:19.279690 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nll7b" event={"ID":"72e152ab-fabb-4551-b0f9-7c520824edef","Type":"ContainerDied","Data":"d0cc1ad43b229aca9ad385df1dc473c715282ad188dded83376fd354ce0f50e6"} Oct 14 13:17:19 crc kubenswrapper[4725]: I1014 13:17:19.299698 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.299670538 podStartE2EDuration="3.299670538s" podCreationTimestamp="2025-10-14 13:17:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:17:19.296259712 +0000 UTC m=+156.144694521" watchObservedRunningTime="2025-10-14 13:17:19.299670538 +0000 UTC m=+156.148105347" Oct 14 13:17:19 crc kubenswrapper[4725]: I1014 13:17:19.332581 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" event={"ID":"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3","Type":"ContainerStarted","Data":"4bcba93452e78d8e294d0f0d77ab15a6fbce67060187ef59b8b4c92498e1a2fa"} Oct 14 13:17:19 crc kubenswrapper[4725]: I1014 13:17:19.332632 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" event={"ID":"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3","Type":"ContainerStarted","Data":"310f5a209ae1d5ef1379ffa93113145d5368934df740c6190be135120d919648"} Oct 14 13:17:19 crc kubenswrapper[4725]: I1014 13:17:19.366751 4725 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bkn7h" Oct 14 13:17:19 crc kubenswrapper[4725]: I1014 13:17:19.773290 4725 patch_prober.go:28] interesting pod/router-default-5444994796-twd2b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:17:19 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Oct 14 13:17:19 crc kubenswrapper[4725]: [+]process-running ok Oct 14 13:17:19 crc kubenswrapper[4725]: healthz check failed Oct 14 13:17:19 crc kubenswrapper[4725]: I1014 13:17:19.773806 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-twd2b" podUID="23ffd8bd-e6ca-4824-900e-08ba0cd80041" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:17:19 crc kubenswrapper[4725]: I1014 13:17:19.799063 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bkn7h"] Oct 14 13:17:19 crc kubenswrapper[4725]: W1014 13:17:19.884055 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04c32971_b976_46b3_b96c_2bcc703b4dd0.slice/crio-3e21b3f86352c1bafd583b0a2b1a1054dacb7e0dff1d1994f71e282feff9a6f3 WatchSource:0}: Error finding container 3e21b3f86352c1bafd583b0a2b1a1054dacb7e0dff1d1994f71e282feff9a6f3: Status 404 returned error can't find the container with id 3e21b3f86352c1bafd583b0a2b1a1054dacb7e0dff1d1994f71e282feff9a6f3 Oct 14 13:17:19 crc kubenswrapper[4725]: I1014 13:17:19.967965 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Oct 14 13:17:20 crc kubenswrapper[4725]: I1014 13:17:20.104971 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 14 13:17:20 crc kubenswrapper[4725]: I1014 13:17:20.106164 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 14 13:17:20 crc kubenswrapper[4725]: I1014 13:17:20.111210 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 14 13:17:20 crc kubenswrapper[4725]: I1014 13:17:20.111654 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Oct 14 13:17:20 crc kubenswrapper[4725]: I1014 13:17:20.111860 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Oct 14 13:17:20 crc kubenswrapper[4725]: I1014 13:17:20.252929 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4687710d-b374-4609-ba37-39a6c13610e8-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4687710d-b374-4609-ba37-39a6c13610e8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 14 13:17:20 crc kubenswrapper[4725]: I1014 13:17:20.253382 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4687710d-b374-4609-ba37-39a6c13610e8-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4687710d-b374-4609-ba37-39a6c13610e8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 14 13:17:20 crc kubenswrapper[4725]: I1014 13:17:20.343719 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bkn7h" event={"ID":"04c32971-b976-46b3-b96c-2bcc703b4dd0","Type":"ContainerStarted","Data":"3e21b3f86352c1bafd583b0a2b1a1054dacb7e0dff1d1994f71e282feff9a6f3"} Oct 14 13:17:20 crc kubenswrapper[4725]: I1014 13:17:20.348101 4725 generic.go:334] "Generic (PLEG): container finished" podID="cc7e8275-cf1b-4d13-9ba5-95f65a961049" containerID="bcc9635aab3450c4443c53c039c8446770ffec1359a9b0f7286a1c7d4336e2b2" exitCode=0 Oct 14 13:17:20 crc kubenswrapper[4725]: I1014 13:17:20.348215 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dw67l" event={"ID":"cc7e8275-cf1b-4d13-9ba5-95f65a961049","Type":"ContainerDied","Data":"bcc9635aab3450c4443c53c039c8446770ffec1359a9b0f7286a1c7d4336e2b2"} Oct 14 13:17:20 crc kubenswrapper[4725]: I1014 13:17:20.355486 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4687710d-b374-4609-ba37-39a6c13610e8-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4687710d-b374-4609-ba37-39a6c13610e8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 14 13:17:20 crc kubenswrapper[4725]: I1014 13:17:20.355553 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4687710d-b374-4609-ba37-39a6c13610e8-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4687710d-b374-4609-ba37-39a6c13610e8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 14 13:17:20 crc kubenswrapper[4725]: I1014 13:17:20.357532 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4687710d-b374-4609-ba37-39a6c13610e8-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4687710d-b374-4609-ba37-39a6c13610e8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 14 13:17:20 crc kubenswrapper[4725]: I1014 13:17:20.365656 4725 
generic.go:334] "Generic (PLEG): container finished" podID="8b02b86e-fa0a-4465-9613-44ab7d201daf" containerID="897e85d39d394c279839bf9347e8b5ab7a7ea825bf1f4273b9f0590f8fc44111" exitCode=0 Oct 14 13:17:20 crc kubenswrapper[4725]: I1014 13:17:20.366561 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8b02b86e-fa0a-4465-9613-44ab7d201daf","Type":"ContainerDied","Data":"897e85d39d394c279839bf9347e8b5ab7a7ea825bf1f4273b9f0590f8fc44111"} Oct 14 13:17:20 crc kubenswrapper[4725]: I1014 13:17:20.366607 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:20 crc kubenswrapper[4725]: I1014 13:17:20.387318 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4687710d-b374-4609-ba37-39a6c13610e8-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4687710d-b374-4609-ba37-39a6c13610e8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 14 13:17:20 crc kubenswrapper[4725]: I1014 13:17:20.402404 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" podStartSLOduration=135.402376913 podStartE2EDuration="2m15.402376913s" podCreationTimestamp="2025-10-14 13:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:17:20.397718554 +0000 UTC m=+157.246153373" watchObservedRunningTime="2025-10-14 13:17:20.402376913 +0000 UTC m=+157.250811722" Oct 14 13:17:20 crc kubenswrapper[4725]: I1014 13:17:20.439782 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 14 13:17:20 crc kubenswrapper[4725]: I1014 13:17:20.734877 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 14 13:17:20 crc kubenswrapper[4725]: W1014 13:17:20.750794 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4687710d_b374_4609_ba37_39a6c13610e8.slice/crio-3b2b01b26f2f782a6923bfc1ec3aa0a56b6cb9622f4515f58939e584314f752c WatchSource:0}: Error finding container 3b2b01b26f2f782a6923bfc1ec3aa0a56b6cb9622f4515f58939e584314f752c: Status 404 returned error can't find the container with id 3b2b01b26f2f782a6923bfc1ec3aa0a56b6cb9622f4515f58939e584314f752c Oct 14 13:17:20 crc kubenswrapper[4725]: I1014 13:17:20.767818 4725 patch_prober.go:28] interesting pod/router-default-5444994796-twd2b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:17:20 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Oct 14 13:17:20 crc kubenswrapper[4725]: [+]process-running ok Oct 14 13:17:20 crc kubenswrapper[4725]: healthz check failed Oct 14 13:17:20 crc kubenswrapper[4725]: I1014 13:17:20.767886 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-twd2b" podUID="23ffd8bd-e6ca-4824-900e-08ba0cd80041" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:17:21 crc kubenswrapper[4725]: I1014 13:17:21.376895 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4687710d-b374-4609-ba37-39a6c13610e8","Type":"ContainerStarted","Data":"3b2b01b26f2f782a6923bfc1ec3aa0a56b6cb9622f4515f58939e584314f752c"} Oct 14 13:17:21 crc kubenswrapper[4725]: I1014 13:17:21.386218 4725 generic.go:334] "Generic (PLEG): container finished" podID="04c32971-b976-46b3-b96c-2bcc703b4dd0" containerID="c074d4b5db77768221b55db33cad78aa90de7a41afcc8c6a1cc52681ad25e5d8" exitCode=0 Oct 14 13:17:21 crc kubenswrapper[4725]: I1014 13:17:21.390043 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bkn7h" event={"ID":"04c32971-b976-46b3-b96c-2bcc703b4dd0","Type":"ContainerDied","Data":"c074d4b5db77768221b55db33cad78aa90de7a41afcc8c6a1cc52681ad25e5d8"} Oct 14 13:17:21 crc kubenswrapper[4725]: I1014 13:17:21.766922 4725 patch_prober.go:28] interesting pod/router-default-5444994796-twd2b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:17:21 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Oct 14 13:17:21 crc kubenswrapper[4725]: [+]process-running ok Oct 14 13:17:21 crc kubenswrapper[4725]: healthz check failed Oct 14 13:17:21 crc kubenswrapper[4725]: I1014 13:17:21.767381 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-twd2b" podUID="23ffd8bd-e6ca-4824-900e-08ba0cd80041" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:17:21 crc kubenswrapper[4725]: I1014 13:17:21.792162 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 14 13:17:21 crc kubenswrapper[4725]: I1014 13:17:21.894075 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b02b86e-fa0a-4465-9613-44ab7d201daf-kubelet-dir\") pod \"8b02b86e-fa0a-4465-9613-44ab7d201daf\" (UID: \"8b02b86e-fa0a-4465-9613-44ab7d201daf\") " Oct 14 13:17:21 crc kubenswrapper[4725]: I1014 13:17:21.894235 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b02b86e-fa0a-4465-9613-44ab7d201daf-kube-api-access\") pod \"8b02b86e-fa0a-4465-9613-44ab7d201daf\" (UID: \"8b02b86e-fa0a-4465-9613-44ab7d201daf\") " Oct 14 13:17:21 crc kubenswrapper[4725]: I1014 13:17:21.894205 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b02b86e-fa0a-4465-9613-44ab7d201daf-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8b02b86e-fa0a-4465-9613-44ab7d201daf" (UID: "8b02b86e-fa0a-4465-9613-44ab7d201daf"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:17:21 crc kubenswrapper[4725]: I1014 13:17:21.894661 4725 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b02b86e-fa0a-4465-9613-44ab7d201daf-kubelet-dir\") on node \"crc\" DevicePath \"\"" Oct 14 13:17:21 crc kubenswrapper[4725]: I1014 13:17:21.911412 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b02b86e-fa0a-4465-9613-44ab7d201daf-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8b02b86e-fa0a-4465-9613-44ab7d201daf" (UID: "8b02b86e-fa0a-4465-9613-44ab7d201daf"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:17:22 crc kubenswrapper[4725]: I1014 13:17:22.000231 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b02b86e-fa0a-4465-9613-44ab7d201daf-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 14 13:17:22 crc kubenswrapper[4725]: I1014 13:17:22.418369 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4687710d-b374-4609-ba37-39a6c13610e8","Type":"ContainerStarted","Data":"deec0357b7c5b756dab4b25dfe877b4f8531f84a8f35d63cf09c0f09c6e5338a"} Oct 14 13:17:22 crc kubenswrapper[4725]: I1014 13:17:22.430918 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8b02b86e-fa0a-4465-9613-44ab7d201daf","Type":"ContainerDied","Data":"3b19794d0b4ad067833968890623d9c7412249b9474cea1b7721168e40e582dd"} Oct 14 13:17:22 crc kubenswrapper[4725]: I1014 13:17:22.431404 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b19794d0b4ad067833968890623d9c7412249b9474cea1b7721168e40e582dd" Oct 14 13:17:22 crc kubenswrapper[4725]: I1014 13:17:22.431675 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 14 13:17:22 crc kubenswrapper[4725]: I1014 13:17:22.437611 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.437587402 podStartE2EDuration="2.437587402s" podCreationTimestamp="2025-10-14 13:17:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:17:22.43572159 +0000 UTC m=+159.284156419" watchObservedRunningTime="2025-10-14 13:17:22.437587402 +0000 UTC m=+159.286022231" Oct 14 13:17:22 crc kubenswrapper[4725]: I1014 13:17:22.515308 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-p8t6g" Oct 14 13:17:22 crc kubenswrapper[4725]: I1014 13:17:22.522332 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-p8t6g" Oct 14 13:17:22 crc kubenswrapper[4725]: I1014 13:17:22.765069 4725 patch_prober.go:28] interesting pod/router-default-5444994796-twd2b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:17:22 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Oct 14 13:17:22 crc kubenswrapper[4725]: [+]process-running ok Oct 14 13:17:22 crc kubenswrapper[4725]: healthz check failed Oct 14 13:17:22 crc kubenswrapper[4725]: I1014 13:17:22.765416 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-twd2b" podUID="23ffd8bd-e6ca-4824-900e-08ba0cd80041" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:17:22 crc kubenswrapper[4725]: I1014 13:17:22.931315 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-zbjgv" Oct 14 13:17:23 crc kubenswrapper[4725]: I1014 13:17:23.456179 4725 generic.go:334] "Generic (PLEG): container finished" podID="4687710d-b374-4609-ba37-39a6c13610e8" containerID="deec0357b7c5b756dab4b25dfe877b4f8531f84a8f35d63cf09c0f09c6e5338a" exitCode=0 Oct 14 13:17:23 crc kubenswrapper[4725]: I1014 13:17:23.457190 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4687710d-b374-4609-ba37-39a6c13610e8","Type":"ContainerDied","Data":"deec0357b7c5b756dab4b25dfe877b4f8531f84a8f35d63cf09c0f09c6e5338a"} Oct 14 13:17:23 crc kubenswrapper[4725]: I1014 13:17:23.764997 4725 patch_prober.go:28] interesting pod/router-default-5444994796-twd2b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:17:23 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Oct 14 13:17:23 crc kubenswrapper[4725]: [+]process-running ok Oct 14 13:17:23 crc kubenswrapper[4725]: healthz check failed Oct 14 13:17:23 crc kubenswrapper[4725]: I1014 13:17:23.765067 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-twd2b" podUID="23ffd8bd-e6ca-4824-900e-08ba0cd80041" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:17:24 crc kubenswrapper[4725]: I1014 13:17:24.766486 4725 patch_prober.go:28] interesting 
pod/router-default-5444994796-twd2b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:17:24 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Oct 14 13:17:24 crc kubenswrapper[4725]: [+]process-running ok Oct 14 13:17:24 crc kubenswrapper[4725]: healthz check failed Oct 14 13:17:24 crc kubenswrapper[4725]: I1014 13:17:24.767026 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-twd2b" podUID="23ffd8bd-e6ca-4824-900e-08ba0cd80041" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:17:25 crc kubenswrapper[4725]: I1014 13:17:25.765273 4725 patch_prober.go:28] interesting pod/router-default-5444994796-twd2b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 14 13:17:25 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Oct 14 13:17:25 crc kubenswrapper[4725]: [+]process-running ok Oct 14 13:17:25 crc kubenswrapper[4725]: healthz check failed Oct 14 13:17:25 crc kubenswrapper[4725]: I1014 13:17:25.765350 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-twd2b" podUID="23ffd8bd-e6ca-4824-900e-08ba0cd80041" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 14 13:17:26 crc kubenswrapper[4725]: I1014 13:17:26.776327 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-twd2b" Oct 14 13:17:26 crc kubenswrapper[4725]: I1014 13:17:26.781157 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-twd2b" Oct 14 13:17:27 crc kubenswrapper[4725]: I1014 13:17:27.367862 4725 patch_prober.go:28] interesting pod/console-f9d7485db-dbshk container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.18:8443/health\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Oct 14 13:17:27 crc kubenswrapper[4725]: I1014 13:17:27.368233 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-dbshk" podUID="9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9" containerName="console" probeResult="failure" output="Get \"https://10.217.0.18:8443/health\": dial tcp 10.217.0.18:8443: connect: connection refused" Oct 14 13:17:27 crc kubenswrapper[4725]: I1014 13:17:27.473595 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-4v6wb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Oct 14 13:17:27 crc kubenswrapper[4725]: I1014 13:17:27.473625 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-4v6wb container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Oct 14 13:17:27 crc kubenswrapper[4725]: I1014 13:17:27.473658 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4v6wb" podUID="3fd3f30a-c1ac-4a8f-8d95-9d4165d608a2" containerName="download-server" 
probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Oct 14 13:17:27 crc kubenswrapper[4725]: I1014 13:17:27.473771 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-4v6wb" podUID="3fd3f30a-c1ac-4a8f-8d95-9d4165d608a2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Oct 14 13:17:27 crc kubenswrapper[4725]: I1014 13:17:27.734497 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b-metrics-certs\") pod \"network-metrics-daemon-cxcmw\" (UID: \"c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b\") " pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:17:27 crc kubenswrapper[4725]: I1014 13:17:27.745257 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b-metrics-certs\") pod \"network-metrics-daemon-cxcmw\" (UID: \"c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b\") " pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:17:27 crc kubenswrapper[4725]: I1014 13:17:27.938220 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cxcmw" Oct 14 13:17:32 crc kubenswrapper[4725]: I1014 13:17:32.520390 4725 patch_prober.go:28] interesting pod/machine-config-daemon-t9hh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 13:17:32 crc kubenswrapper[4725]: I1014 13:17:32.520870 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 13:17:33 crc kubenswrapper[4725]: I1014 13:17:33.776473 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 14 13:17:33 crc kubenswrapper[4725]: I1014 13:17:33.838349 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4687710d-b374-4609-ba37-39a6c13610e8-kube-api-access\") pod \"4687710d-b374-4609-ba37-39a6c13610e8\" (UID: \"4687710d-b374-4609-ba37-39a6c13610e8\") " Oct 14 13:17:33 crc kubenswrapper[4725]: I1014 13:17:33.838401 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4687710d-b374-4609-ba37-39a6c13610e8-kubelet-dir\") pod \"4687710d-b374-4609-ba37-39a6c13610e8\" (UID: \"4687710d-b374-4609-ba37-39a6c13610e8\") " Oct 14 13:17:33 crc kubenswrapper[4725]: I1014 13:17:33.838704 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4687710d-b374-4609-ba37-39a6c13610e8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4687710d-b374-4609-ba37-39a6c13610e8" (UID: "4687710d-b374-4609-ba37-39a6c13610e8"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:17:33 crc kubenswrapper[4725]: I1014 13:17:33.845871 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4687710d-b374-4609-ba37-39a6c13610e8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4687710d-b374-4609-ba37-39a6c13610e8" (UID: "4687710d-b374-4609-ba37-39a6c13610e8"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:17:33 crc kubenswrapper[4725]: I1014 13:17:33.939899 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4687710d-b374-4609-ba37-39a6c13610e8-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 14 13:17:33 crc kubenswrapper[4725]: I1014 13:17:33.939944 4725 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4687710d-b374-4609-ba37-39a6c13610e8-kubelet-dir\") on node \"crc\" DevicePath \"\"" Oct 14 13:17:34 crc kubenswrapper[4725]: I1014 13:17:34.615153 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4687710d-b374-4609-ba37-39a6c13610e8","Type":"ContainerDied","Data":"3b2b01b26f2f782a6923bfc1ec3aa0a56b6cb9622f4515f58939e584314f752c"} Oct 14 13:17:34 crc kubenswrapper[4725]: I1014 13:17:34.615213 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b2b01b26f2f782a6923bfc1ec3aa0a56b6cb9622f4515f58939e584314f752c" Oct 14 13:17:34 crc kubenswrapper[4725]: I1014 13:17:34.615209 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 14 13:17:37 crc kubenswrapper[4725]: I1014 13:17:37.376856 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-dbshk" Oct 14 13:17:37 crc kubenswrapper[4725]: I1014 13:17:37.382586 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-dbshk" Oct 14 13:17:37 crc kubenswrapper[4725]: I1014 13:17:37.531885 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-4v6wb" Oct 14 13:17:38 crc kubenswrapper[4725]: I1014 13:17:38.403826 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:17:44 crc kubenswrapper[4725]: E1014 13:17:44.243665 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Oct 14 13:17:44 crc kubenswrapper[4725]: E1014 13:17:44.245003 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jnxlj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-6fsmm_openshift-marketplace(57b7ca76-c75e-44f5-b428-6233a92bca51): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 14 13:17:44 crc kubenswrapper[4725]: E1014 13:17:44.246489 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-6fsmm" podUID="57b7ca76-c75e-44f5-b428-6233a92bca51" Oct 14 13:17:46 crc kubenswrapper[4725]: E1014 13:17:46.100541 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Oct 14 13:17:46 crc kubenswrapper[4725]: E1014 13:17:46.101242 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5g5cs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-lj4wj_openshift-marketplace(70da4b97-6b5f-4aae-93d3-ef2593f042c0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 14 13:17:46 crc kubenswrapper[4725]: E1014 13:17:46.102550 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-lj4wj" podUID="70da4b97-6b5f-4aae-93d3-ef2593f042c0" Oct 14 13:17:47 crc kubenswrapper[4725]: I1014 13:17:47.589094 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-b9mz9" Oct 14 13:17:55 crc kubenswrapper[4725]: I1014 13:17:55.139123 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 14 13:18:00 crc kubenswrapper[4725]: E1014 13:18:00.633988 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Oct 14 13:18:00 crc kubenswrapper[4725]: E1014 13:18:00.634715 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6vvxh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-dw67l_openshift-marketplace(cc7e8275-cf1b-4d13-9ba5-95f65a961049): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 14 13:18:00 crc kubenswrapper[4725]: E1014 13:18:00.635894 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-dw67l" podUID="cc7e8275-cf1b-4d13-9ba5-95f65a961049" Oct 14 13:18:00 crc kubenswrapper[4725]: E1014 13:18:00.856664 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Oct 14 13:18:00 crc kubenswrapper[4725]: E1014 13:18:00.856879 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gbwz9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-xs2wz_openshift-marketplace(a4706b0b-daed-451d-814b-bd2aa92c3c11): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 14 13:18:00 crc kubenswrapper[4725]: E1014 13:18:00.858167 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-xs2wz" podUID="a4706b0b-daed-451d-814b-bd2aa92c3c11" Oct 14 13:18:01 crc kubenswrapper[4725]: E1014 13:18:01.533235 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Oct 14 13:18:01 crc kubenswrapper[4725]: E1014 13:18:01.533516 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ds6lb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-bkn7h_openshift-marketplace(04c32971-b976-46b3-b96c-2bcc703b4dd0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 14 13:18:01 crc kubenswrapper[4725]: E1014 13:18:01.534896 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-bkn7h" podUID="04c32971-b976-46b3-b96c-2bcc703b4dd0" Oct 14 13:18:02 crc kubenswrapper[4725]: E1014 13:18:02.360557 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Oct 14 13:18:02 crc kubenswrapper[4725]: E1014 13:18:02.360849 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ptcqc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-g9pj7_openshift-marketplace(15dbec8a-0d83-438c-b58b-3eb5aafa1f95): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 14 13:18:02 crc kubenswrapper[4725]: E1014 13:18:02.363236 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-g9pj7" podUID="15dbec8a-0d83-438c-b58b-3eb5aafa1f95" Oct 14 13:18:02 crc kubenswrapper[4725]: I1014 13:18:02.520443 4725 patch_prober.go:28] interesting pod/machine-config-daemon-t9hh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 13:18:02 crc kubenswrapper[4725]: I1014 13:18:02.520561 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 13:18:02 crc kubenswrapper[4725]: E1014 13:18:02.827946 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bkn7h" podUID="04c32971-b976-46b3-b96c-2bcc703b4dd0" Oct 14 13:18:02 crc kubenswrapper[4725]: E1014 13:18:02.827950 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-dw67l" podUID="cc7e8275-cf1b-4d13-9ba5-95f65a961049" Oct 14 13:18:02 crc kubenswrapper[4725]: E1014 13:18:02.828198 
4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-xs2wz" podUID="a4706b0b-daed-451d-814b-bd2aa92c3c11" Oct 14 13:18:03 crc kubenswrapper[4725]: I1014 13:18:03.274597 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-cxcmw"] Oct 14 13:18:03 crc kubenswrapper[4725]: W1014 13:18:03.287854 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc23b90af_fa80_4af4_84cb_3fc9cf3f4c5b.slice/crio-bfe911e1d394f46c0a8cc807c514613eb5cf4128ab02b10d7e64de5d86b73274 WatchSource:0}: Error finding container bfe911e1d394f46c0a8cc807c514613eb5cf4128ab02b10d7e64de5d86b73274: Status 404 returned error can't find the container with id bfe911e1d394f46c0a8cc807c514613eb5cf4128ab02b10d7e64de5d86b73274 Oct 14 13:18:03 crc kubenswrapper[4725]: E1014 13:18:03.749801 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-g9pj7" podUID="15dbec8a-0d83-438c-b58b-3eb5aafa1f95" Oct 14 13:18:03 crc kubenswrapper[4725]: I1014 13:18:03.849068 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-cxcmw" event={"ID":"c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b","Type":"ContainerStarted","Data":"bfe911e1d394f46c0a8cc807c514613eb5cf4128ab02b10d7e64de5d86b73274"} Oct 14 13:18:05 crc kubenswrapper[4725]: E1014 13:18:05.962423 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Oct 14 13:18:05 crc kubenswrapper[4725]: E1014 13:18:05.963069 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vcljq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-fwjbz_openshift-marketplace(b4c5e303-50a7-4c4f-835f-651e69aba358): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 14 13:18:05 crc kubenswrapper[4725]: E1014 13:18:05.964571 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-fwjbz" podUID="b4c5e303-50a7-4c4f-835f-651e69aba358" Oct 14 13:18:06 crc kubenswrapper[4725]: E1014 13:18:06.871959 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-fwjbz" podUID="b4c5e303-50a7-4c4f-835f-651e69aba358" Oct 14 13:18:07 crc kubenswrapper[4725]: I1014 13:18:07.877086 4725 generic.go:334] "Generic (PLEG): container finished" podID="72e152ab-fabb-4551-b0f9-7c520824edef" containerID="ee23a4719894549baa019926218fff42dbdc8e1ede7e7386f5358833de90d409" exitCode=0 Oct 14 13:18:07 crc kubenswrapper[4725]: I1014 13:18:07.877177 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nll7b" event={"ID":"72e152ab-fabb-4551-b0f9-7c520824edef","Type":"ContainerDied","Data":"ee23a4719894549baa019926218fff42dbdc8e1ede7e7386f5358833de90d409"} Oct 14 13:18:07 crc kubenswrapper[4725]: I1014 13:18:07.880661 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-cxcmw" event={"ID":"c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b","Type":"ContainerStarted","Data":"b8ee29decb6eeab83097a567be81bd4646e748958676bf25e9d4705e2bbd6370"} Oct 14 13:18:08 crc kubenswrapper[4725]: I1014 13:18:08.888058 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-cxcmw" 
event={"ID":"c23b90af-fa80-4af4-84cb-3fc9cf3f4c5b","Type":"ContainerStarted","Data":"dfc943e316282cccde97a1cb0f6fcb6e6cb87ebb7dcc827386c5e158289e83ef"} Oct 14 13:18:08 crc kubenswrapper[4725]: I1014 13:18:08.917474 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-cxcmw" podStartSLOduration=183.917422291 podStartE2EDuration="3m3.917422291s" podCreationTimestamp="2025-10-14 13:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:18:08.905116123 +0000 UTC m=+205.753550982" watchObservedRunningTime="2025-10-14 13:18:08.917422291 +0000 UTC m=+205.765857110" Oct 14 13:18:10 crc kubenswrapper[4725]: I1014 13:18:10.902716 4725 generic.go:334] "Generic (PLEG): container finished" podID="57b7ca76-c75e-44f5-b428-6233a92bca51" containerID="cc8fa8fd2d6af923a8ddf93a8266612d5120b6d11e434b855dd743bb8e7bfd77" exitCode=0 Oct 14 13:18:10 crc kubenswrapper[4725]: I1014 13:18:10.902811 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6fsmm" event={"ID":"57b7ca76-c75e-44f5-b428-6233a92bca51","Type":"ContainerDied","Data":"cc8fa8fd2d6af923a8ddf93a8266612d5120b6d11e434b855dd743bb8e7bfd77"} Oct 14 13:18:10 crc kubenswrapper[4725]: I1014 13:18:10.907470 4725 generic.go:334] "Generic (PLEG): container finished" podID="70da4b97-6b5f-4aae-93d3-ef2593f042c0" containerID="5bc3d3c3cdd6fe710b3b529e57918076dff872ab5e2c7fe051a09cfd8c4ee670" exitCode=0 Oct 14 13:18:10 crc kubenswrapper[4725]: I1014 13:18:10.907528 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lj4wj" event={"ID":"70da4b97-6b5f-4aae-93d3-ef2593f042c0","Type":"ContainerDied","Data":"5bc3d3c3cdd6fe710b3b529e57918076dff872ab5e2c7fe051a09cfd8c4ee670"} Oct 14 13:18:11 crc kubenswrapper[4725]: I1014 13:18:11.930202 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nll7b" event={"ID":"72e152ab-fabb-4551-b0f9-7c520824edef","Type":"ContainerStarted","Data":"c4aa4965666e6efb4672caa3904e2baea6c22dc8b12f1dd37b47f278e2163f0d"} Oct 14 13:18:11 crc kubenswrapper[4725]: I1014 13:18:11.981639 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nll7b" podStartSLOduration=3.48909187 podStartE2EDuration="54.981606327s" podCreationTimestamp="2025-10-14 13:17:17 +0000 UTC" firstStartedPulling="2025-10-14 13:17:19.281347088 +0000 UTC m=+156.129781897" lastFinishedPulling="2025-10-14 13:18:10.773861535 +0000 UTC m=+207.622296354" observedRunningTime="2025-10-14 13:18:11.973934446 +0000 UTC m=+208.822369255" watchObservedRunningTime="2025-10-14 13:18:11.981606327 +0000 UTC m=+208.830041136" Oct 14 13:18:12 crc kubenswrapper[4725]: I1014 13:18:12.931517 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6fsmm" event={"ID":"57b7ca76-c75e-44f5-b428-6233a92bca51","Type":"ContainerStarted","Data":"a71ac9025aab4a47628a88fe0728c90233eb64f34ebfec97508968bb9fe50c1d"} Oct 14 13:18:12 crc kubenswrapper[4725]: I1014 13:18:12.933834 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lj4wj" event={"ID":"70da4b97-6b5f-4aae-93d3-ef2593f042c0","Type":"ContainerStarted","Data":"76afc92211dad8bd71472cd56e15c5f8bbae187315a6ba48ef8656ae82af661d"} Oct 14 13:18:12 crc kubenswrapper[4725]: I1014 13:18:12.950810 4725 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6fsmm" podStartSLOduration=3.735308728 podStartE2EDuration="58.950774716s" podCreationTimestamp="2025-10-14 13:17:14 +0000 UTC" firstStartedPulling="2025-10-14 13:17:17.289707661 +0000 UTC m=+154.138142470" lastFinishedPulling="2025-10-14 13:18:12.505173649 +0000 UTC m=+209.353608458" observedRunningTime="2025-10-14 13:18:12.950219995 +0000 UTC m=+209.798654804" watchObservedRunningTime="2025-10-14 13:18:12.950774716 +0000 UTC m=+209.799209525" Oct 14 13:18:12 crc kubenswrapper[4725]: I1014 13:18:12.982971 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lj4wj" podStartSLOduration=4.455319543 podStartE2EDuration="57.98294555s" podCreationTimestamp="2025-10-14 13:17:15 +0000 UTC" firstStartedPulling="2025-10-14 13:17:18.301157319 +0000 UTC m=+155.149592128" lastFinishedPulling="2025-10-14 13:18:11.828783316 +0000 UTC m=+208.677218135" observedRunningTime="2025-10-14 13:18:12.979720188 +0000 UTC m=+209.828155027" watchObservedRunningTime="2025-10-14 13:18:12.98294555 +0000 UTC m=+209.831380359" Oct 14 13:18:15 crc kubenswrapper[4725]: I1014 13:18:15.312337 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6fsmm" Oct 14 13:18:15 crc kubenswrapper[4725]: I1014 13:18:15.312761 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6fsmm" Oct 14 13:18:15 crc kubenswrapper[4725]: I1014 13:18:15.825361 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lj4wj" Oct 14 13:18:15 crc kubenswrapper[4725]: I1014 13:18:15.825949 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lj4wj" Oct 14 13:18:15 crc kubenswrapper[4725]: I1014 13:18:15.981776 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dw67l" event={"ID":"cc7e8275-cf1b-4d13-9ba5-95f65a961049","Type":"ContainerStarted","Data":"39021970d56810f431fd99f12a7d91ba15da7f48aedd498cf595bde016de9e4b"} Oct 14 13:18:16 crc kubenswrapper[4725]: I1014 13:18:16.128814 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6fsmm" Oct 14 13:18:16 crc kubenswrapper[4725]: I1014 13:18:16.133129 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lj4wj" Oct 14 13:18:16 crc kubenswrapper[4725]: I1014 13:18:16.989903 4725 generic.go:334] "Generic (PLEG): container finished" podID="cc7e8275-cf1b-4d13-9ba5-95f65a961049" containerID="39021970d56810f431fd99f12a7d91ba15da7f48aedd498cf595bde016de9e4b" exitCode=0 Oct 14 13:18:16 crc kubenswrapper[4725]: I1014 13:18:16.990219 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dw67l" event={"ID":"cc7e8275-cf1b-4d13-9ba5-95f65a961049","Type":"ContainerDied","Data":"39021970d56810f431fd99f12a7d91ba15da7f48aedd498cf595bde016de9e4b"} Oct 14 13:18:17 crc kubenswrapper[4725]: I1014 13:18:17.933129 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nll7b" Oct 14 13:18:17 crc kubenswrapper[4725]: I1014 13:18:17.933708 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-nll7b" Oct 14 13:18:17 crc kubenswrapper[4725]: I1014 13:18:17.990804 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nll7b" Oct 14 13:18:18 crc kubenswrapper[4725]: I1014 13:18:18.047257 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nll7b" Oct 14 13:18:21 crc kubenswrapper[4725]: I1014 13:18:21.017440 4725 generic.go:334] "Generic (PLEG): container finished" podID="15dbec8a-0d83-438c-b58b-3eb5aafa1f95" containerID="a513be71a6f537afe641d601cf112fc3c83babfb98153d0cb0bb9cd9d94a32c3" exitCode=0 Oct 14 13:18:21 crc kubenswrapper[4725]: I1014 13:18:21.017899 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g9pj7" event={"ID":"15dbec8a-0d83-438c-b58b-3eb5aafa1f95","Type":"ContainerDied","Data":"a513be71a6f537afe641d601cf112fc3c83babfb98153d0cb0bb9cd9d94a32c3"} Oct 14 13:18:21 crc kubenswrapper[4725]: I1014 13:18:21.022639 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dw67l" event={"ID":"cc7e8275-cf1b-4d13-9ba5-95f65a961049","Type":"ContainerStarted","Data":"9dec9d1b7a60e1db5de66e27051be6529a55113c3db3b72089f78e81ebd5208c"} Oct 14 13:18:21 crc kubenswrapper[4725]: I1014 13:18:21.024526 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xs2wz" event={"ID":"a4706b0b-daed-451d-814b-bd2aa92c3c11","Type":"ContainerStarted","Data":"6fbde0ed66796b424b4a06082cac426e6e12a4c9356667ab0650d7b4459458c4"} Oct 14 13:18:21 crc kubenswrapper[4725]: I1014 13:18:21.030773 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bkn7h" event={"ID":"04c32971-b976-46b3-b96c-2bcc703b4dd0","Type":"ContainerStarted","Data":"b269f5371967d58c485d38fb1a8a50815237656ccde271b51861cfeb93c3483b"} Oct 14 13:18:21 crc kubenswrapper[4725]: I1014 13:18:21.068330 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dw67l" podStartSLOduration=3.535092179 podStartE2EDuration="1m3.06831231s" podCreationTimestamp="2025-10-14 13:17:18 +0000 UTC" firstStartedPulling="2025-10-14 13:17:20.349929195 +0000 UTC m=+157.198364004" lastFinishedPulling="2025-10-14 13:18:19.883149306 +0000 UTC m=+216.731584135" observedRunningTime="2025-10-14 13:18:21.067473588 +0000 UTC m=+217.915908417" watchObservedRunningTime="2025-10-14 13:18:21.06831231 +0000 UTC m=+217.916747109" Oct 14 13:18:22 crc kubenswrapper[4725]: I1014 13:18:22.043361 4725 generic.go:334] "Generic (PLEG): container finished" podID="a4706b0b-daed-451d-814b-bd2aa92c3c11" containerID="6fbde0ed66796b424b4a06082cac426e6e12a4c9356667ab0650d7b4459458c4" exitCode=0 Oct 14 13:18:22 crc kubenswrapper[4725]: I1014 13:18:22.043516 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xs2wz" event={"ID":"a4706b0b-daed-451d-814b-bd2aa92c3c11","Type":"ContainerDied","Data":"6fbde0ed66796b424b4a06082cac426e6e12a4c9356667ab0650d7b4459458c4"} Oct 14 13:18:22 crc kubenswrapper[4725]: I1014 13:18:22.046867 4725 generic.go:334] "Generic (PLEG): container finished" podID="04c32971-b976-46b3-b96c-2bcc703b4dd0" containerID="b269f5371967d58c485d38fb1a8a50815237656ccde271b51861cfeb93c3483b" exitCode=0 Oct 14 13:18:22 crc kubenswrapper[4725]: I1014 13:18:22.046921 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-bkn7h" event={"ID":"04c32971-b976-46b3-b96c-2bcc703b4dd0","Type":"ContainerDied","Data":"b269f5371967d58c485d38fb1a8a50815237656ccde271b51861cfeb93c3483b"} Oct 14 13:18:22 crc kubenswrapper[4725]: I1014 13:18:22.342322 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nll7b"] Oct 14 13:18:22 crc kubenswrapper[4725]: I1014 13:18:22.342697 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nll7b" podUID="72e152ab-fabb-4551-b0f9-7c520824edef" containerName="registry-server" containerID="cri-o://c4aa4965666e6efb4672caa3904e2baea6c22dc8b12f1dd37b47f278e2163f0d" gracePeriod=2 Oct 14 13:18:25 crc kubenswrapper[4725]: I1014 13:18:25.354888 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6fsmm" Oct 14 13:18:25 crc kubenswrapper[4725]: I1014 13:18:25.880071 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lj4wj" Oct 14 13:18:26 crc kubenswrapper[4725]: I1014 13:18:26.075927 4725 generic.go:334] "Generic (PLEG): container finished" podID="72e152ab-fabb-4551-b0f9-7c520824edef" containerID="c4aa4965666e6efb4672caa3904e2baea6c22dc8b12f1dd37b47f278e2163f0d" exitCode=0 Oct 14 13:18:26 crc kubenswrapper[4725]: I1014 13:18:26.076008 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nll7b" event={"ID":"72e152ab-fabb-4551-b0f9-7c520824edef","Type":"ContainerDied","Data":"c4aa4965666e6efb4672caa3904e2baea6c22dc8b12f1dd37b47f278e2163f0d"} Oct 14 13:18:27 crc kubenswrapper[4725]: I1014 13:18:27.540662 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lj4wj"] Oct 14 13:18:27 crc kubenswrapper[4725]: I1014 13:18:27.541343 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lj4wj" podUID="70da4b97-6b5f-4aae-93d3-ef2593f042c0" containerName="registry-server" containerID="cri-o://76afc92211dad8bd71472cd56e15c5f8bbae187315a6ba48ef8656ae82af661d" gracePeriod=2 Oct 14 13:18:27 crc kubenswrapper[4725]: E1014 13:18:27.933502 4725 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c4aa4965666e6efb4672caa3904e2baea6c22dc8b12f1dd37b47f278e2163f0d is running failed: container process not found" containerID="c4aa4965666e6efb4672caa3904e2baea6c22dc8b12f1dd37b47f278e2163f0d" cmd=["grpc_health_probe","-addr=:50051"] Oct 14 13:18:27 crc kubenswrapper[4725]: E1014 13:18:27.933950 4725 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c4aa4965666e6efb4672caa3904e2baea6c22dc8b12f1dd37b47f278e2163f0d is running failed: container process not found" containerID="c4aa4965666e6efb4672caa3904e2baea6c22dc8b12f1dd37b47f278e2163f0d" cmd=["grpc_health_probe","-addr=:50051"] Oct 14 13:18:27 crc kubenswrapper[4725]: E1014 13:18:27.934438 4725 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c4aa4965666e6efb4672caa3904e2baea6c22dc8b12f1dd37b47f278e2163f0d is running failed: container process not found" containerID="c4aa4965666e6efb4672caa3904e2baea6c22dc8b12f1dd37b47f278e2163f0d" 
cmd=["grpc_health_probe","-addr=:50051"] Oct 14 13:18:27 crc kubenswrapper[4725]: E1014 13:18:27.934550 4725 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c4aa4965666e6efb4672caa3904e2baea6c22dc8b12f1dd37b47f278e2163f0d is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-nll7b" podUID="72e152ab-fabb-4551-b0f9-7c520824edef" containerName="registry-server" Oct 14 13:18:28 crc kubenswrapper[4725]: I1014 13:18:28.735589 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nll7b" Oct 14 13:18:28 crc kubenswrapper[4725]: I1014 13:18:28.776784 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dw67l" Oct 14 13:18:28 crc kubenswrapper[4725]: I1014 13:18:28.777035 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dw67l" Oct 14 13:18:28 crc kubenswrapper[4725]: I1014 13:18:28.799596 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qhmq\" (UniqueName: \"kubernetes.io/projected/72e152ab-fabb-4551-b0f9-7c520824edef-kube-api-access-6qhmq\") pod \"72e152ab-fabb-4551-b0f9-7c520824edef\" (UID: \"72e152ab-fabb-4551-b0f9-7c520824edef\") " Oct 14 13:18:28 crc kubenswrapper[4725]: I1014 13:18:28.799819 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72e152ab-fabb-4551-b0f9-7c520824edef-catalog-content\") pod \"72e152ab-fabb-4551-b0f9-7c520824edef\" (UID: \"72e152ab-fabb-4551-b0f9-7c520824edef\") " Oct 14 13:18:28 crc kubenswrapper[4725]: I1014 13:18:28.799878 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72e152ab-fabb-4551-b0f9-7c520824edef-utilities\") pod \"72e152ab-fabb-4551-b0f9-7c520824edef\" (UID: \"72e152ab-fabb-4551-b0f9-7c520824edef\") " Oct 14 13:18:28 crc kubenswrapper[4725]: I1014 13:18:28.800980 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72e152ab-fabb-4551-b0f9-7c520824edef-utilities" (OuterVolumeSpecName: "utilities") pod "72e152ab-fabb-4551-b0f9-7c520824edef" (UID: "72e152ab-fabb-4551-b0f9-7c520824edef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:18:28 crc kubenswrapper[4725]: I1014 13:18:28.808219 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72e152ab-fabb-4551-b0f9-7c520824edef-kube-api-access-6qhmq" (OuterVolumeSpecName: "kube-api-access-6qhmq") pod "72e152ab-fabb-4551-b0f9-7c520824edef" (UID: "72e152ab-fabb-4551-b0f9-7c520824edef"). InnerVolumeSpecName "kube-api-access-6qhmq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:18:28 crc kubenswrapper[4725]: I1014 13:18:28.816172 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dw67l" Oct 14 13:18:28 crc kubenswrapper[4725]: I1014 13:18:28.818553 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72e152ab-fabb-4551-b0f9-7c520824edef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "72e152ab-fabb-4551-b0f9-7c520824edef" (UID: "72e152ab-fabb-4551-b0f9-7c520824edef"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:18:28 crc kubenswrapper[4725]: I1014 13:18:28.901249 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72e152ab-fabb-4551-b0f9-7c520824edef-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 13:18:28 crc kubenswrapper[4725]: I1014 13:18:28.901297 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72e152ab-fabb-4551-b0f9-7c520824edef-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 13:18:28 crc kubenswrapper[4725]: I1014 13:18:28.901310 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qhmq\" (UniqueName: \"kubernetes.io/projected/72e152ab-fabb-4551-b0f9-7c520824edef-kube-api-access-6qhmq\") on node \"crc\" DevicePath \"\"" Oct 14 13:18:29 crc kubenswrapper[4725]: I1014 13:18:29.101166 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nll7b" event={"ID":"72e152ab-fabb-4551-b0f9-7c520824edef","Type":"ContainerDied","Data":"fdfc38ec0c6ed300772e83e962cf6cad97af68369fd63b021dbaa38d6c5ad12a"} Oct 14 13:18:29 crc kubenswrapper[4725]: I1014 13:18:29.101436 4725 scope.go:117] "RemoveContainer" containerID="c4aa4965666e6efb4672caa3904e2baea6c22dc8b12f1dd37b47f278e2163f0d" Oct 14 13:18:29 crc kubenswrapper[4725]: I1014 13:18:29.101630 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nll7b" Oct 14 13:18:29 crc kubenswrapper[4725]: I1014 13:18:29.142621 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nll7b"] Oct 14 13:18:29 crc kubenswrapper[4725]: I1014 13:18:29.146371 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nll7b"] Oct 14 13:18:29 crc kubenswrapper[4725]: I1014 13:18:29.155912 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dw67l" Oct 14 13:18:29 crc kubenswrapper[4725]: I1014 13:18:29.927541 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72e152ab-fabb-4551-b0f9-7c520824edef" path="/var/lib/kubelet/pods/72e152ab-fabb-4551-b0f9-7c520824edef/volumes" Oct 14 13:18:31 crc kubenswrapper[4725]: I1014 13:18:31.120126 4725 generic.go:334] "Generic (PLEG): container finished" podID="70da4b97-6b5f-4aae-93d3-ef2593f042c0" containerID="76afc92211dad8bd71472cd56e15c5f8bbae187315a6ba48ef8656ae82af661d" exitCode=0 Oct 14 13:18:31 crc kubenswrapper[4725]: I1014 13:18:31.120199 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lj4wj" event={"ID":"70da4b97-6b5f-4aae-93d3-ef2593f042c0","Type":"ContainerDied","Data":"76afc92211dad8bd71472cd56e15c5f8bbae187315a6ba48ef8656ae82af661d"} Oct 14 13:18:32 crc kubenswrapper[4725]: I1014 13:18:32.520559 4725 patch_prober.go:28] interesting pod/machine-config-daemon-t9hh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 13:18:32 crc kubenswrapper[4725]: I1014 13:18:32.520675 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 13:18:32 crc kubenswrapper[4725]: I1014 13:18:32.520764 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" Oct 14 13:18:32 crc kubenswrapper[4725]: I1014 13:18:32.522009 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6cc50d2bb6d8b894c9524b2557eda22f857a08c286a90a95ab4be265b524020c"} pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 13:18:32 crc kubenswrapper[4725]: I1014 13:18:32.522278 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" containerID="cri-o://6cc50d2bb6d8b894c9524b2557eda22f857a08c286a90a95ab4be265b524020c" gracePeriod=600 Oct 14 13:18:33 crc kubenswrapper[4725]: I1014 13:18:33.141788 4725 generic.go:334] "Generic (PLEG): container finished" podID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerID="6cc50d2bb6d8b894c9524b2557eda22f857a08c286a90a95ab4be265b524020c" exitCode=0 Oct 14 13:18:33 crc kubenswrapper[4725]: I1014 13:18:33.141864 4725 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" event={"ID":"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c","Type":"ContainerDied","Data":"6cc50d2bb6d8b894c9524b2557eda22f857a08c286a90a95ab4be265b524020c"} Oct 14 13:18:34 crc kubenswrapper[4725]: I1014 13:18:34.872380 4725 scope.go:117] "RemoveContainer" containerID="ee23a4719894549baa019926218fff42dbdc8e1ede7e7386f5358833de90d409" Oct 14 13:18:35 crc kubenswrapper[4725]: E1014 13:18:35.826715 4725 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 76afc92211dad8bd71472cd56e15c5f8bbae187315a6ba48ef8656ae82af661d is running failed: container process not found" containerID="76afc92211dad8bd71472cd56e15c5f8bbae187315a6ba48ef8656ae82af661d" cmd=["grpc_health_probe","-addr=:50051"] Oct 14 13:18:35 crc kubenswrapper[4725]: E1014 13:18:35.827862 4725 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 76afc92211dad8bd71472cd56e15c5f8bbae187315a6ba48ef8656ae82af661d is running failed: container process not found" containerID="76afc92211dad8bd71472cd56e15c5f8bbae187315a6ba48ef8656ae82af661d" cmd=["grpc_health_probe","-addr=:50051"] Oct 14 13:18:35 crc kubenswrapper[4725]: E1014 13:18:35.828315 4725 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 76afc92211dad8bd71472cd56e15c5f8bbae187315a6ba48ef8656ae82af661d is running failed: container process not found" containerID="76afc92211dad8bd71472cd56e15c5f8bbae187315a6ba48ef8656ae82af661d" cmd=["grpc_health_probe","-addr=:50051"] Oct 14 13:18:35 crc kubenswrapper[4725]: E1014 13:18:35.828437 4725 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 76afc92211dad8bd71472cd56e15c5f8bbae187315a6ba48ef8656ae82af661d is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-lj4wj" podUID="70da4b97-6b5f-4aae-93d3-ef2593f042c0" containerName="registry-server" Oct 14 13:18:38 crc kubenswrapper[4725]: I1014 13:18:38.697482 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lj4wj" Oct 14 13:18:38 crc kubenswrapper[4725]: I1014 13:18:38.786052 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70da4b97-6b5f-4aae-93d3-ef2593f042c0-catalog-content\") pod \"70da4b97-6b5f-4aae-93d3-ef2593f042c0\" (UID: \"70da4b97-6b5f-4aae-93d3-ef2593f042c0\") " Oct 14 13:18:38 crc kubenswrapper[4725]: I1014 13:18:38.786280 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5g5cs\" (UniqueName: \"kubernetes.io/projected/70da4b97-6b5f-4aae-93d3-ef2593f042c0-kube-api-access-5g5cs\") pod \"70da4b97-6b5f-4aae-93d3-ef2593f042c0\" (UID: \"70da4b97-6b5f-4aae-93d3-ef2593f042c0\") " Oct 14 13:18:38 crc kubenswrapper[4725]: I1014 13:18:38.786337 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70da4b97-6b5f-4aae-93d3-ef2593f042c0-utilities\") pod \"70da4b97-6b5f-4aae-93d3-ef2593f042c0\" (UID: \"70da4b97-6b5f-4aae-93d3-ef2593f042c0\") " Oct 14 13:18:38 crc kubenswrapper[4725]: I1014 13:18:38.787876 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70da4b97-6b5f-4aae-93d3-ef2593f042c0-utilities" (OuterVolumeSpecName: "utilities") pod "70da4b97-6b5f-4aae-93d3-ef2593f042c0" (UID: "70da4b97-6b5f-4aae-93d3-ef2593f042c0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:18:38 crc kubenswrapper[4725]: I1014 13:18:38.795328 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70da4b97-6b5f-4aae-93d3-ef2593f042c0-kube-api-access-5g5cs" (OuterVolumeSpecName: "kube-api-access-5g5cs") pod "70da4b97-6b5f-4aae-93d3-ef2593f042c0" (UID: "70da4b97-6b5f-4aae-93d3-ef2593f042c0"). InnerVolumeSpecName "kube-api-access-5g5cs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:18:38 crc kubenswrapper[4725]: I1014 13:18:38.836019 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70da4b97-6b5f-4aae-93d3-ef2593f042c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "70da4b97-6b5f-4aae-93d3-ef2593f042c0" (UID: "70da4b97-6b5f-4aae-93d3-ef2593f042c0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:18:38 crc kubenswrapper[4725]: I1014 13:18:38.888165 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5g5cs\" (UniqueName: \"kubernetes.io/projected/70da4b97-6b5f-4aae-93d3-ef2593f042c0-kube-api-access-5g5cs\") on node \"crc\" DevicePath \"\"" Oct 14 13:18:38 crc kubenswrapper[4725]: I1014 13:18:38.888222 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70da4b97-6b5f-4aae-93d3-ef2593f042c0-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 13:18:38 crc kubenswrapper[4725]: I1014 13:18:38.888306 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70da4b97-6b5f-4aae-93d3-ef2593f042c0-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 13:18:39 crc kubenswrapper[4725]: I1014 13:18:39.191332 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lj4wj" event={"ID":"70da4b97-6b5f-4aae-93d3-ef2593f042c0","Type":"ContainerDied","Data":"e5b3c346467ed22b994f4505b2b1a8b3f92b0105ea9b0ab9089e7d2862aab638"} Oct 14 13:18:39 crc kubenswrapper[4725]: I1014 13:18:39.191442 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lj4wj" Oct 14 13:18:39 crc kubenswrapper[4725]: I1014 13:18:39.294060 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lj4wj"] Oct 14 13:18:39 crc kubenswrapper[4725]: I1014 13:18:39.300289 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lj4wj"] Oct 14 13:18:39 crc kubenswrapper[4725]: I1014 13:18:39.403568 4725 scope.go:117] "RemoveContainer" containerID="d0cc1ad43b229aca9ad385df1dc473c715282ad188dded83376fd354ce0f50e6" Oct 14 13:18:39 crc kubenswrapper[4725]: I1014 13:18:39.738036 4725 scope.go:117] "RemoveContainer" containerID="76afc92211dad8bd71472cd56e15c5f8bbae187315a6ba48ef8656ae82af661d" Oct 14 13:18:39 crc kubenswrapper[4725]: I1014 13:18:39.802160 4725 scope.go:117] "RemoveContainer" containerID="5bc3d3c3cdd6fe710b3b529e57918076dff872ab5e2c7fe051a09cfd8c4ee670" Oct 14 13:18:39 crc kubenswrapper[4725]: I1014 13:18:39.845964 4725 scope.go:117] "RemoveContainer" containerID="c2188edf3766945e5953b903299bbc7ee9ac4a6d7154e83f86dc1d375ee146f5" Oct 14 13:18:39 crc kubenswrapper[4725]: I1014 13:18:39.926556 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70da4b97-6b5f-4aae-93d3-ef2593f042c0" path="/var/lib/kubelet/pods/70da4b97-6b5f-4aae-93d3-ef2593f042c0/volumes" Oct 14 13:18:40 crc kubenswrapper[4725]: I1014 13:18:40.200188 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwjbz" event={"ID":"b4c5e303-50a7-4c4f-835f-651e69aba358","Type":"ContainerStarted","Data":"d503808146882bcdf595f2f9627552a2bce5df141c5ee1b051b7bdc1365f1a66"} Oct 14 13:18:40 crc kubenswrapper[4725]: I1014 13:18:40.204439 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bkn7h" event={"ID":"04c32971-b976-46b3-b96c-2bcc703b4dd0","Type":"ContainerStarted","Data":"814ca7293ee103234b9c6e9fea78673eb0fefabfa642e92aa634f93ba5552a64"} Oct 14 13:18:40 crc kubenswrapper[4725]: I1014 13:18:40.206890 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g9pj7" 
event={"ID":"15dbec8a-0d83-438c-b58b-3eb5aafa1f95","Type":"ContainerStarted","Data":"73db548c4f761d59f43893cbe7d37266aecb18d9a07f0fa8659bc545e98be789"} Oct 14 13:18:40 crc kubenswrapper[4725]: I1014 13:18:40.209708 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" event={"ID":"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c","Type":"ContainerStarted","Data":"b08205f9a4a675b919bf3d7c8faa16184d5d3774c47efd0ec1554cceaf138624"} Oct 14 13:18:40 crc kubenswrapper[4725]: I1014 13:18:40.212676 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xs2wz" event={"ID":"a4706b0b-daed-451d-814b-bd2aa92c3c11","Type":"ContainerStarted","Data":"9903906ec65b1fbc138954f244c30e1cf5b3c5137337f6d6ac77fd91a1f15c61"} Oct 14 13:18:40 crc kubenswrapper[4725]: I1014 13:18:40.242146 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xs2wz" podStartSLOduration=4.259433702 podStartE2EDuration="1m25.242121081s" podCreationTimestamp="2025-10-14 13:17:15 +0000 UTC" firstStartedPulling="2025-10-14 13:17:18.301312583 +0000 UTC m=+155.149747392" lastFinishedPulling="2025-10-14 13:18:39.283999952 +0000 UTC m=+236.132434771" observedRunningTime="2025-10-14 13:18:40.240718088 +0000 UTC m=+237.089152927" watchObservedRunningTime="2025-10-14 13:18:40.242121081 +0000 UTC m=+237.090555890" Oct 14 13:18:41 crc kubenswrapper[4725]: I1014 13:18:41.221357 4725 generic.go:334] "Generic (PLEG): container finished" podID="b4c5e303-50a7-4c4f-835f-651e69aba358" containerID="d503808146882bcdf595f2f9627552a2bce5df141c5ee1b051b7bdc1365f1a66" exitCode=0 Oct 14 13:18:41 crc kubenswrapper[4725]: I1014 13:18:41.221435 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwjbz" event={"ID":"b4c5e303-50a7-4c4f-835f-651e69aba358","Type":"ContainerDied","Data":"d503808146882bcdf595f2f9627552a2bce5df141c5ee1b051b7bdc1365f1a66"} Oct 14 13:18:41 crc kubenswrapper[4725]: I1014 13:18:41.253529 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bkn7h" podStartSLOduration=5.371528223 podStartE2EDuration="1m23.253506025s" podCreationTimestamp="2025-10-14 13:17:18 +0000 UTC" firstStartedPulling="2025-10-14 13:17:21.402441905 +0000 UTC m=+158.250876704" lastFinishedPulling="2025-10-14 13:18:39.284419657 +0000 UTC m=+236.132854506" observedRunningTime="2025-10-14 13:18:41.251854323 +0000 UTC m=+238.100289132" watchObservedRunningTime="2025-10-14 13:18:41.253506025 +0000 UTC m=+238.101940844" Oct 14 13:18:41 crc kubenswrapper[4725]: I1014 13:18:41.278731 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g9pj7" podStartSLOduration=4.176355433 podStartE2EDuration="1m25.278704253s" podCreationTimestamp="2025-10-14 13:17:16 +0000 UTC" firstStartedPulling="2025-10-14 13:17:18.301432357 +0000 UTC m=+155.149867166" lastFinishedPulling="2025-10-14 13:18:39.403781177 +0000 UTC m=+236.252215986" observedRunningTime="2025-10-14 13:18:41.273854309 +0000 UTC m=+238.122289128" watchObservedRunningTime="2025-10-14 13:18:41.278704253 +0000 UTC m=+238.127139062" Oct 14 13:18:42 crc kubenswrapper[4725]: I1014 13:18:42.229879 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwjbz" 
event={"ID":"b4c5e303-50a7-4c4f-835f-651e69aba358","Type":"ContainerStarted","Data":"a9cc8e0b1bc6e0143fd5ec2dc0ede7fd500a35cf69950d4fc987cd58672f96dd"} Oct 14 13:18:42 crc kubenswrapper[4725]: I1014 13:18:42.252764 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fwjbz" podStartSLOduration=3.878674657 podStartE2EDuration="1m27.252736748s" podCreationTimestamp="2025-10-14 13:17:15 +0000 UTC" firstStartedPulling="2025-10-14 13:17:18.300795959 +0000 UTC m=+155.149230768" lastFinishedPulling="2025-10-14 13:18:41.67485805 +0000 UTC m=+238.523292859" observedRunningTime="2025-10-14 13:18:42.247307821 +0000 UTC m=+239.095742650" watchObservedRunningTime="2025-10-14 13:18:42.252736748 +0000 UTC m=+239.101171567" Oct 14 13:18:45 crc kubenswrapper[4725]: I1014 13:18:45.524265 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fwjbz" Oct 14 13:18:45 crc kubenswrapper[4725]: I1014 13:18:45.525152 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fwjbz" Oct 14 13:18:45 crc kubenswrapper[4725]: I1014 13:18:45.597043 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fwjbz" Oct 14 13:18:45 crc kubenswrapper[4725]: I1014 13:18:45.933883 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xs2wz" Oct 14 13:18:45 crc kubenswrapper[4725]: I1014 13:18:45.934264 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xs2wz" Oct 14 13:18:45 crc kubenswrapper[4725]: I1014 13:18:45.970689 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xs2wz" Oct 14 13:18:46 crc kubenswrapper[4725]: I1014 13:18:46.292013 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xs2wz" Oct 14 13:18:46 crc kubenswrapper[4725]: I1014 13:18:46.301783 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fwjbz" Oct 14 13:18:47 crc kubenswrapper[4725]: I1014 13:18:47.332502 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g9pj7" Oct 14 13:18:47 crc kubenswrapper[4725]: I1014 13:18:47.333085 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g9pj7" Oct 14 13:18:47 crc kubenswrapper[4725]: I1014 13:18:47.370309 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g9pj7" Oct 14 13:18:48 crc kubenswrapper[4725]: I1014 13:18:48.307802 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g9pj7" Oct 14 13:18:48 crc kubenswrapper[4725]: I1014 13:18:48.348897 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xs2wz"] Oct 14 13:18:48 crc kubenswrapper[4725]: I1014 13:18:48.349111 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xs2wz" podUID="a4706b0b-daed-451d-814b-bd2aa92c3c11" containerName="registry-server" 
containerID="cri-o://9903906ec65b1fbc138954f244c30e1cf5b3c5137337f6d6ac77fd91a1f15c61" gracePeriod=2 Oct 14 13:18:48 crc kubenswrapper[4725]: I1014 13:18:48.753485 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xs2wz" Oct 14 13:18:48 crc kubenswrapper[4725]: I1014 13:18:48.837291 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbwz9\" (UniqueName: \"kubernetes.io/projected/a4706b0b-daed-451d-814b-bd2aa92c3c11-kube-api-access-gbwz9\") pod \"a4706b0b-daed-451d-814b-bd2aa92c3c11\" (UID: \"a4706b0b-daed-451d-814b-bd2aa92c3c11\") " Oct 14 13:18:48 crc kubenswrapper[4725]: I1014 13:18:48.837785 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4706b0b-daed-451d-814b-bd2aa92c3c11-utilities\") pod \"a4706b0b-daed-451d-814b-bd2aa92c3c11\" (UID: \"a4706b0b-daed-451d-814b-bd2aa92c3c11\") " Oct 14 13:18:48 crc kubenswrapper[4725]: I1014 13:18:48.837852 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4706b0b-daed-451d-814b-bd2aa92c3c11-catalog-content\") pod \"a4706b0b-daed-451d-814b-bd2aa92c3c11\" (UID: \"a4706b0b-daed-451d-814b-bd2aa92c3c11\") " Oct 14 13:18:48 crc kubenswrapper[4725]: I1014 13:18:48.839201 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4706b0b-daed-451d-814b-bd2aa92c3c11-utilities" (OuterVolumeSpecName: "utilities") pod "a4706b0b-daed-451d-814b-bd2aa92c3c11" (UID: "a4706b0b-daed-451d-814b-bd2aa92c3c11"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:18:48 crc kubenswrapper[4725]: I1014 13:18:48.849532 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4706b0b-daed-451d-814b-bd2aa92c3c11-kube-api-access-gbwz9" (OuterVolumeSpecName: "kube-api-access-gbwz9") pod "a4706b0b-daed-451d-814b-bd2aa92c3c11" (UID: "a4706b0b-daed-451d-814b-bd2aa92c3c11"). InnerVolumeSpecName "kube-api-access-gbwz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:18:48 crc kubenswrapper[4725]: I1014 13:18:48.892514 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4706b0b-daed-451d-814b-bd2aa92c3c11-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a4706b0b-daed-451d-814b-bd2aa92c3c11" (UID: "a4706b0b-daed-451d-814b-bd2aa92c3c11"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:18:48 crc kubenswrapper[4725]: I1014 13:18:48.938996 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4706b0b-daed-451d-814b-bd2aa92c3c11-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 13:18:48 crc kubenswrapper[4725]: I1014 13:18:48.939041 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4706b0b-daed-451d-814b-bd2aa92c3c11-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 13:18:48 crc kubenswrapper[4725]: I1014 13:18:48.939056 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbwz9\" (UniqueName: \"kubernetes.io/projected/a4706b0b-daed-451d-814b-bd2aa92c3c11-kube-api-access-gbwz9\") on node \"crc\" DevicePath \"\"" Oct 14 13:18:49 crc kubenswrapper[4725]: I1014 13:18:49.270862 4725 generic.go:334] "Generic (PLEG): container finished" podID="a4706b0b-daed-451d-814b-bd2aa92c3c11" containerID="9903906ec65b1fbc138954f244c30e1cf5b3c5137337f6d6ac77fd91a1f15c61" exitCode=0 Oct 14 13:18:49 crc kubenswrapper[4725]: I1014 13:18:49.270968 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xs2wz" event={"ID":"a4706b0b-daed-451d-814b-bd2aa92c3c11","Type":"ContainerDied","Data":"9903906ec65b1fbc138954f244c30e1cf5b3c5137337f6d6ac77fd91a1f15c61"} Oct 14 13:18:49 crc kubenswrapper[4725]: I1014 13:18:49.271031 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xs2wz" event={"ID":"a4706b0b-daed-451d-814b-bd2aa92c3c11","Type":"ContainerDied","Data":"3f6ac7b5f8478762cecf6319b5bc0825a6e13741a67a8460bde76526fad402c5"} Oct 14 13:18:49 crc kubenswrapper[4725]: I1014 13:18:49.271033 4725 util.go:48] "No ready sandbox for pod can be found. 
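Each terminating pod above goes through the same three-step volume sequence: the reconciler logs "operationExecutor.UnmountVolume started" for every volume it still holds, operation_generator reports "UnmountVolume.TearDown succeeded" per volume, and only then does reconciler_common record "Volume detached ... DevicePath \"\"" (the empty DevicePath reflecting that empty-dir and projected volumes have no block device to detach). A toy Go rendering of that loop; the names here are illustrative, not the kubelet's real interfaces:

    package main

    import "log"

    type volume struct {
        name   string // e.g. "utilities", "catalog-content", "kube-api-access-gbwz9"
        plugin string // e.g. "kubernetes.io/empty-dir", "kubernetes.io/projected"
    }

    func unmountAll(podUID string, vols []volume, tearDown func(volume) error) {
        for _, v := range vols {
            log.Printf("operationExecutor.UnmountVolume started for volume %q pod %q", v.name, podUID)
            if err := tearDown(v); err != nil {
                // A failed TearDown is simply retried on a later reconciler pass.
                log.Printf("UnmountVolume.TearDown failed for volume %q: %v", v.name, err)
                continue
            }
            log.Printf("UnmountVolume.TearDown succeeded for volume %q, PluginName %q", v.name, v.plugin)
            // Marking the volume detached is what later allows the pod directory
            // to be removed ("Cleaned up orphaned pod volumes dir").
            log.Printf("Volume detached for volume %q on node %q DevicePath \"\"", v.name, "crc")
        }
    }

    func main() {
        unmountAll("a4706b0b-daed-451d-814b-bd2aa92c3c11", []volume{
            {"kube-api-access-gbwz9", "kubernetes.io/projected"},
            {"utilities", "kubernetes.io/empty-dir"},
            {"catalog-content", "kubernetes.io/empty-dir"},
        }, func(volume) error { return nil })
    }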
Need to start a new one" pod="openshift-marketplace/community-operators-xs2wz" Oct 14 13:18:49 crc kubenswrapper[4725]: I1014 13:18:49.271067 4725 scope.go:117] "RemoveContainer" containerID="9903906ec65b1fbc138954f244c30e1cf5b3c5137337f6d6ac77fd91a1f15c61" Oct 14 13:18:49 crc kubenswrapper[4725]: I1014 13:18:49.302861 4725 scope.go:117] "RemoveContainer" containerID="6fbde0ed66796b424b4a06082cac426e6e12a4c9356667ab0650d7b4459458c4" Oct 14 13:18:49 crc kubenswrapper[4725]: I1014 13:18:49.307877 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xs2wz"] Oct 14 13:18:49 crc kubenswrapper[4725]: I1014 13:18:49.311575 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xs2wz"] Oct 14 13:18:49 crc kubenswrapper[4725]: I1014 13:18:49.333444 4725 scope.go:117] "RemoveContainer" containerID="7e7b30e3cb5671a7cf31e88b4d412f4eb42019ab18c64e8a08ee7672f1e6c553" Oct 14 13:18:49 crc kubenswrapper[4725]: I1014 13:18:49.350190 4725 scope.go:117] "RemoveContainer" containerID="9903906ec65b1fbc138954f244c30e1cf5b3c5137337f6d6ac77fd91a1f15c61" Oct 14 13:18:49 crc kubenswrapper[4725]: E1014 13:18:49.351720 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9903906ec65b1fbc138954f244c30e1cf5b3c5137337f6d6ac77fd91a1f15c61\": container with ID starting with 9903906ec65b1fbc138954f244c30e1cf5b3c5137337f6d6ac77fd91a1f15c61 not found: ID does not exist" containerID="9903906ec65b1fbc138954f244c30e1cf5b3c5137337f6d6ac77fd91a1f15c61" Oct 14 13:18:49 crc kubenswrapper[4725]: I1014 13:18:49.351789 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9903906ec65b1fbc138954f244c30e1cf5b3c5137337f6d6ac77fd91a1f15c61"} err="failed to get container status \"9903906ec65b1fbc138954f244c30e1cf5b3c5137337f6d6ac77fd91a1f15c61\": rpc error: code = NotFound desc = could not find container \"9903906ec65b1fbc138954f244c30e1cf5b3c5137337f6d6ac77fd91a1f15c61\": container with ID starting with 9903906ec65b1fbc138954f244c30e1cf5b3c5137337f6d6ac77fd91a1f15c61 not found: ID does not exist" Oct 14 13:18:49 crc kubenswrapper[4725]: I1014 13:18:49.351843 4725 scope.go:117] "RemoveContainer" containerID="6fbde0ed66796b424b4a06082cac426e6e12a4c9356667ab0650d7b4459458c4" Oct 14 13:18:49 crc kubenswrapper[4725]: E1014 13:18:49.352808 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fbde0ed66796b424b4a06082cac426e6e12a4c9356667ab0650d7b4459458c4\": container with ID starting with 6fbde0ed66796b424b4a06082cac426e6e12a4c9356667ab0650d7b4459458c4 not found: ID does not exist" containerID="6fbde0ed66796b424b4a06082cac426e6e12a4c9356667ab0650d7b4459458c4" Oct 14 13:18:49 crc kubenswrapper[4725]: I1014 13:18:49.352891 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fbde0ed66796b424b4a06082cac426e6e12a4c9356667ab0650d7b4459458c4"} err="failed to get container status \"6fbde0ed66796b424b4a06082cac426e6e12a4c9356667ab0650d7b4459458c4\": rpc error: code = NotFound desc = could not find container \"6fbde0ed66796b424b4a06082cac426e6e12a4c9356667ab0650d7b4459458c4\": container with ID starting with 6fbde0ed66796b424b4a06082cac426e6e12a4c9356667ab0650d7b4459458c4 not found: ID does not exist" Oct 14 13:18:49 crc kubenswrapper[4725]: I1014 13:18:49.352916 4725 scope.go:117] "RemoveContainer" 
containerID="7e7b30e3cb5671a7cf31e88b4d412f4eb42019ab18c64e8a08ee7672f1e6c553" Oct 14 13:18:49 crc kubenswrapper[4725]: E1014 13:18:49.353406 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e7b30e3cb5671a7cf31e88b4d412f4eb42019ab18c64e8a08ee7672f1e6c553\": container with ID starting with 7e7b30e3cb5671a7cf31e88b4d412f4eb42019ab18c64e8a08ee7672f1e6c553 not found: ID does not exist" containerID="7e7b30e3cb5671a7cf31e88b4d412f4eb42019ab18c64e8a08ee7672f1e6c553" Oct 14 13:18:49 crc kubenswrapper[4725]: I1014 13:18:49.353529 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e7b30e3cb5671a7cf31e88b4d412f4eb42019ab18c64e8a08ee7672f1e6c553"} err="failed to get container status \"7e7b30e3cb5671a7cf31e88b4d412f4eb42019ab18c64e8a08ee7672f1e6c553\": rpc error: code = NotFound desc = could not find container \"7e7b30e3cb5671a7cf31e88b4d412f4eb42019ab18c64e8a08ee7672f1e6c553\": container with ID starting with 7e7b30e3cb5671a7cf31e88b4d412f4eb42019ab18c64e8a08ee7672f1e6c553 not found: ID does not exist" Oct 14 13:18:49 crc kubenswrapper[4725]: I1014 13:18:49.368480 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bkn7h" Oct 14 13:18:49 crc kubenswrapper[4725]: I1014 13:18:49.368621 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bkn7h" Oct 14 13:18:49 crc kubenswrapper[4725]: I1014 13:18:49.420350 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bkn7h" Oct 14 13:18:49 crc kubenswrapper[4725]: I1014 13:18:49.931342 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4706b0b-daed-451d-814b-bd2aa92c3c11" path="/var/lib/kubelet/pods/a4706b0b-daed-451d-814b-bd2aa92c3c11/volumes" Oct 14 13:18:50 crc kubenswrapper[4725]: I1014 13:18:50.329830 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bkn7h" Oct 14 13:18:52 crc kubenswrapper[4725]: I1014 13:18:52.741420 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bkn7h"] Oct 14 13:18:52 crc kubenswrapper[4725]: I1014 13:18:52.742187 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bkn7h" podUID="04c32971-b976-46b3-b96c-2bcc703b4dd0" containerName="registry-server" containerID="cri-o://814ca7293ee103234b9c6e9fea78673eb0fefabfa642e92aa634f93ba5552a64" gracePeriod=2 Oct 14 13:18:53 crc kubenswrapper[4725]: I1014 13:18:53.117848 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bkn7h" Oct 14 13:18:53 crc kubenswrapper[4725]: I1014 13:18:53.208607 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04c32971-b976-46b3-b96c-2bcc703b4dd0-utilities\") pod \"04c32971-b976-46b3-b96c-2bcc703b4dd0\" (UID: \"04c32971-b976-46b3-b96c-2bcc703b4dd0\") " Oct 14 13:18:53 crc kubenswrapper[4725]: I1014 13:18:53.208760 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ds6lb\" (UniqueName: \"kubernetes.io/projected/04c32971-b976-46b3-b96c-2bcc703b4dd0-kube-api-access-ds6lb\") pod \"04c32971-b976-46b3-b96c-2bcc703b4dd0\" (UID: \"04c32971-b976-46b3-b96c-2bcc703b4dd0\") " Oct 14 13:18:53 crc kubenswrapper[4725]: I1014 13:18:53.208791 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04c32971-b976-46b3-b96c-2bcc703b4dd0-catalog-content\") pod \"04c32971-b976-46b3-b96c-2bcc703b4dd0\" (UID: \"04c32971-b976-46b3-b96c-2bcc703b4dd0\") " Oct 14 13:18:53 crc kubenswrapper[4725]: I1014 13:18:53.210500 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04c32971-b976-46b3-b96c-2bcc703b4dd0-utilities" (OuterVolumeSpecName: "utilities") pod "04c32971-b976-46b3-b96c-2bcc703b4dd0" (UID: "04c32971-b976-46b3-b96c-2bcc703b4dd0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:18:53 crc kubenswrapper[4725]: I1014 13:18:53.211756 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04c32971-b976-46b3-b96c-2bcc703b4dd0-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 13:18:53 crc kubenswrapper[4725]: I1014 13:18:53.218643 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04c32971-b976-46b3-b96c-2bcc703b4dd0-kube-api-access-ds6lb" (OuterVolumeSpecName: "kube-api-access-ds6lb") pod "04c32971-b976-46b3-b96c-2bcc703b4dd0" (UID: "04c32971-b976-46b3-b96c-2bcc703b4dd0"). InnerVolumeSpecName "kube-api-access-ds6lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:18:53 crc kubenswrapper[4725]: I1014 13:18:53.296063 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04c32971-b976-46b3-b96c-2bcc703b4dd0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "04c32971-b976-46b3-b96c-2bcc703b4dd0" (UID: "04c32971-b976-46b3-b96c-2bcc703b4dd0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:18:53 crc kubenswrapper[4725]: I1014 13:18:53.298248 4725 generic.go:334] "Generic (PLEG): container finished" podID="04c32971-b976-46b3-b96c-2bcc703b4dd0" containerID="814ca7293ee103234b9c6e9fea78673eb0fefabfa642e92aa634f93ba5552a64" exitCode=0 Oct 14 13:18:53 crc kubenswrapper[4725]: I1014 13:18:53.298372 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bkn7h" Oct 14 13:18:53 crc kubenswrapper[4725]: I1014 13:18:53.298344 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bkn7h" event={"ID":"04c32971-b976-46b3-b96c-2bcc703b4dd0","Type":"ContainerDied","Data":"814ca7293ee103234b9c6e9fea78673eb0fefabfa642e92aa634f93ba5552a64"} Oct 14 13:18:53 crc kubenswrapper[4725]: I1014 13:18:53.298581 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bkn7h" event={"ID":"04c32971-b976-46b3-b96c-2bcc703b4dd0","Type":"ContainerDied","Data":"3e21b3f86352c1bafd583b0a2b1a1054dacb7e0dff1d1994f71e282feff9a6f3"} Oct 14 13:18:53 crc kubenswrapper[4725]: I1014 13:18:53.298615 4725 scope.go:117] "RemoveContainer" containerID="814ca7293ee103234b9c6e9fea78673eb0fefabfa642e92aa634f93ba5552a64" Oct 14 13:18:53 crc kubenswrapper[4725]: I1014 13:18:53.312803 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ds6lb\" (UniqueName: \"kubernetes.io/projected/04c32971-b976-46b3-b96c-2bcc703b4dd0-kube-api-access-ds6lb\") on node \"crc\" DevicePath \"\"" Oct 14 13:18:53 crc kubenswrapper[4725]: I1014 13:18:53.312850 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04c32971-b976-46b3-b96c-2bcc703b4dd0-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 13:18:53 crc kubenswrapper[4725]: I1014 13:18:53.332832 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bkn7h"] Oct 14 13:18:53 crc kubenswrapper[4725]: I1014 13:18:53.334323 4725 scope.go:117] "RemoveContainer" containerID="b269f5371967d58c485d38fb1a8a50815237656ccde271b51861cfeb93c3483b" Oct 14 13:18:53 crc kubenswrapper[4725]: I1014 13:18:53.336904 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bkn7h"] Oct 14 13:18:53 crc kubenswrapper[4725]: I1014 13:18:53.356299 4725 scope.go:117] "RemoveContainer" containerID="c074d4b5db77768221b55db33cad78aa90de7a41afcc8c6a1cc52681ad25e5d8" Oct 14 13:18:53 crc kubenswrapper[4725]: I1014 13:18:53.372229 4725 scope.go:117] "RemoveContainer" containerID="814ca7293ee103234b9c6e9fea78673eb0fefabfa642e92aa634f93ba5552a64" Oct 14 13:18:53 crc kubenswrapper[4725]: E1014 13:18:53.372873 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"814ca7293ee103234b9c6e9fea78673eb0fefabfa642e92aa634f93ba5552a64\": container with ID starting with 814ca7293ee103234b9c6e9fea78673eb0fefabfa642e92aa634f93ba5552a64 not found: ID does not exist" containerID="814ca7293ee103234b9c6e9fea78673eb0fefabfa642e92aa634f93ba5552a64" Oct 14 13:18:53 crc kubenswrapper[4725]: I1014 13:18:53.373005 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"814ca7293ee103234b9c6e9fea78673eb0fefabfa642e92aa634f93ba5552a64"} err="failed to get container status \"814ca7293ee103234b9c6e9fea78673eb0fefabfa642e92aa634f93ba5552a64\": rpc error: code = NotFound desc = could not find container \"814ca7293ee103234b9c6e9fea78673eb0fefabfa642e92aa634f93ba5552a64\": container with ID starting with 814ca7293ee103234b9c6e9fea78673eb0fefabfa642e92aa634f93ba5552a64 not found: ID does not exist" Oct 14 13:18:53 crc kubenswrapper[4725]: I1014 13:18:53.373114 4725 scope.go:117] "RemoveContainer" 
containerID="b269f5371967d58c485d38fb1a8a50815237656ccde271b51861cfeb93c3483b" Oct 14 13:18:53 crc kubenswrapper[4725]: E1014 13:18:53.374318 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b269f5371967d58c485d38fb1a8a50815237656ccde271b51861cfeb93c3483b\": container with ID starting with b269f5371967d58c485d38fb1a8a50815237656ccde271b51861cfeb93c3483b not found: ID does not exist" containerID="b269f5371967d58c485d38fb1a8a50815237656ccde271b51861cfeb93c3483b" Oct 14 13:18:53 crc kubenswrapper[4725]: I1014 13:18:53.374381 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b269f5371967d58c485d38fb1a8a50815237656ccde271b51861cfeb93c3483b"} err="failed to get container status \"b269f5371967d58c485d38fb1a8a50815237656ccde271b51861cfeb93c3483b\": rpc error: code = NotFound desc = could not find container \"b269f5371967d58c485d38fb1a8a50815237656ccde271b51861cfeb93c3483b\": container with ID starting with b269f5371967d58c485d38fb1a8a50815237656ccde271b51861cfeb93c3483b not found: ID does not exist" Oct 14 13:18:53 crc kubenswrapper[4725]: I1014 13:18:53.374512 4725 scope.go:117] "RemoveContainer" containerID="c074d4b5db77768221b55db33cad78aa90de7a41afcc8c6a1cc52681ad25e5d8" Oct 14 13:18:53 crc kubenswrapper[4725]: E1014 13:18:53.375129 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c074d4b5db77768221b55db33cad78aa90de7a41afcc8c6a1cc52681ad25e5d8\": container with ID starting with c074d4b5db77768221b55db33cad78aa90de7a41afcc8c6a1cc52681ad25e5d8 not found: ID does not exist" containerID="c074d4b5db77768221b55db33cad78aa90de7a41afcc8c6a1cc52681ad25e5d8" Oct 14 13:18:53 crc kubenswrapper[4725]: I1014 13:18:53.375493 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c074d4b5db77768221b55db33cad78aa90de7a41afcc8c6a1cc52681ad25e5d8"} err="failed to get container status \"c074d4b5db77768221b55db33cad78aa90de7a41afcc8c6a1cc52681ad25e5d8\": rpc error: code = NotFound desc = could not find container \"c074d4b5db77768221b55db33cad78aa90de7a41afcc8c6a1cc52681ad25e5d8\": container with ID starting with c074d4b5db77768221b55db33cad78aa90de7a41afcc8c6a1cc52681ad25e5d8 not found: ID does not exist" Oct 14 13:18:53 crc kubenswrapper[4725]: I1014 13:18:53.928020 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04c32971-b976-46b3-b96c-2bcc703b4dd0" path="/var/lib/kubelet/pods/04c32971-b976-46b3-b96c-2bcc703b4dd0/volumes" Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.424575 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6fsmm"] Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.425414 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6fsmm" podUID="57b7ca76-c75e-44f5-b428-6233a92bca51" containerName="registry-server" containerID="cri-o://a71ac9025aab4a47628a88fe0728c90233eb64f34ebfec97508968bb9fe50c1d" gracePeriod=30 Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.452362 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fwjbz"] Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.452808 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fwjbz" 
podUID="b4c5e303-50a7-4c4f-835f-651e69aba358" containerName="registry-server" containerID="cri-o://a9cc8e0b1bc6e0143fd5ec2dc0ede7fd500a35cf69950d4fc987cd58672f96dd" gracePeriod=30 Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.480166 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hb586"] Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.480554 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-hb586" podUID="aba4ae84-aa5c-4790-a5cc-b2c865bae5dd" containerName="marketplace-operator" containerID="cri-o://2237d670b3b2868ab6ba323e777265ab33059959b81c97d04b37e51584eec22f" gracePeriod=30 Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.492748 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vq7zq"] Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.506520 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g9pj7"] Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.506813 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-g9pj7" podUID="15dbec8a-0d83-438c-b58b-3eb5aafa1f95" containerName="registry-server" containerID="cri-o://73db548c4f761d59f43893cbe7d37266aecb18d9a07f0fa8659bc545e98be789" gracePeriod=30 Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.521035 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dw67l"] Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.521333 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dw67l" podUID="cc7e8275-cf1b-4d13-9ba5-95f65a961049" containerName="registry-server" containerID="cri-o://9dec9d1b7a60e1db5de66e27051be6529a55113c3db3b72089f78e81ebd5208c" gracePeriod=30 Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.542899 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-q66z9"] Oct 14 13:19:49 crc kubenswrapper[4725]: E1014 13:19:49.543170 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04c32971-b976-46b3-b96c-2bcc703b4dd0" containerName="extract-utilities" Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.543188 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="04c32971-b976-46b3-b96c-2bcc703b4dd0" containerName="extract-utilities" Oct 14 13:19:49 crc kubenswrapper[4725]: E1014 13:19:49.543200 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b02b86e-fa0a-4465-9613-44ab7d201daf" containerName="pruner" Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.543207 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b02b86e-fa0a-4465-9613-44ab7d201daf" containerName="pruner" Oct 14 13:19:49 crc kubenswrapper[4725]: E1014 13:19:49.543222 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72e152ab-fabb-4551-b0f9-7c520824edef" containerName="extract-content" Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.543230 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="72e152ab-fabb-4551-b0f9-7c520824edef" containerName="extract-content" Oct 14 13:19:49 crc kubenswrapper[4725]: E1014 13:19:49.543251 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70da4b97-6b5f-4aae-93d3-ef2593f042c0" 
containerName="registry-server" Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.543260 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="70da4b97-6b5f-4aae-93d3-ef2593f042c0" containerName="registry-server" Oct 14 13:19:49 crc kubenswrapper[4725]: E1014 13:19:49.543269 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72e152ab-fabb-4551-b0f9-7c520824edef" containerName="extract-utilities" Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.543276 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="72e152ab-fabb-4551-b0f9-7c520824edef" containerName="extract-utilities" Oct 14 13:19:49 crc kubenswrapper[4725]: E1014 13:19:49.543285 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70da4b97-6b5f-4aae-93d3-ef2593f042c0" containerName="extract-content" Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.543292 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="70da4b97-6b5f-4aae-93d3-ef2593f042c0" containerName="extract-content" Oct 14 13:19:49 crc kubenswrapper[4725]: E1014 13:19:49.543301 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70da4b97-6b5f-4aae-93d3-ef2593f042c0" containerName="extract-utilities" Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.543320 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="70da4b97-6b5f-4aae-93d3-ef2593f042c0" containerName="extract-utilities" Oct 14 13:19:49 crc kubenswrapper[4725]: E1014 13:19:49.543329 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4687710d-b374-4609-ba37-39a6c13610e8" containerName="pruner" Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.543336 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="4687710d-b374-4609-ba37-39a6c13610e8" containerName="pruner" Oct 14 13:19:49 crc kubenswrapper[4725]: E1014 13:19:49.543344 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04c32971-b976-46b3-b96c-2bcc703b4dd0" containerName="registry-server" Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.543351 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="04c32971-b976-46b3-b96c-2bcc703b4dd0" containerName="registry-server" Oct 14 13:19:49 crc kubenswrapper[4725]: E1014 13:19:49.543364 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04c32971-b976-46b3-b96c-2bcc703b4dd0" containerName="extract-content" Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.543370 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="04c32971-b976-46b3-b96c-2bcc703b4dd0" containerName="extract-content" Oct 14 13:19:49 crc kubenswrapper[4725]: E1014 13:19:49.543378 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4706b0b-daed-451d-814b-bd2aa92c3c11" containerName="extract-utilities" Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.543385 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4706b0b-daed-451d-814b-bd2aa92c3c11" containerName="extract-utilities" Oct 14 13:19:49 crc kubenswrapper[4725]: E1014 13:19:49.543394 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4706b0b-daed-451d-814b-bd2aa92c3c11" containerName="extract-content" Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.543401 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4706b0b-daed-451d-814b-bd2aa92c3c11" containerName="extract-content" Oct 14 13:19:49 crc kubenswrapper[4725]: E1014 13:19:49.543412 4725 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="72e152ab-fabb-4551-b0f9-7c520824edef" containerName="registry-server" Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.543419 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="72e152ab-fabb-4551-b0f9-7c520824edef" containerName="registry-server" Oct 14 13:19:49 crc kubenswrapper[4725]: E1014 13:19:49.543427 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4706b0b-daed-451d-814b-bd2aa92c3c11" containerName="registry-server" Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.543434 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4706b0b-daed-451d-814b-bd2aa92c3c11" containerName="registry-server" Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.543569 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4706b0b-daed-451d-814b-bd2aa92c3c11" containerName="registry-server" Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.543588 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="72e152ab-fabb-4551-b0f9-7c520824edef" containerName="registry-server" Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.543599 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="70da4b97-6b5f-4aae-93d3-ef2593f042c0" containerName="registry-server" Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.543612 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="4687710d-b374-4609-ba37-39a6c13610e8" containerName="pruner" Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.543624 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b02b86e-fa0a-4465-9613-44ab7d201daf" containerName="pruner" Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.543631 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="04c32971-b976-46b3-b96c-2bcc703b4dd0" containerName="registry-server" Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.544110 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-q66z9" Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.554371 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-q66z9"] Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.666873 4725 generic.go:334] "Generic (PLEG): container finished" podID="aba4ae84-aa5c-4790-a5cc-b2c865bae5dd" containerID="2237d670b3b2868ab6ba323e777265ab33059959b81c97d04b37e51584eec22f" exitCode=0 Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.667159 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hb586" event={"ID":"aba4ae84-aa5c-4790-a5cc-b2c865bae5dd","Type":"ContainerDied","Data":"2237d670b3b2868ab6ba323e777265ab33059959b81c97d04b37e51584eec22f"} Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.694972 4725 generic.go:334] "Generic (PLEG): container finished" podID="57b7ca76-c75e-44f5-b428-6233a92bca51" containerID="a71ac9025aab4a47628a88fe0728c90233eb64f34ebfec97508968bb9fe50c1d" exitCode=0 Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.695153 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6fsmm" event={"ID":"57b7ca76-c75e-44f5-b428-6233a92bca51","Type":"ContainerDied","Data":"a71ac9025aab4a47628a88fe0728c90233eb64f34ebfec97508968bb9fe50c1d"} Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.700323 4725 generic.go:334] "Generic (PLEG): container finished" podID="b4c5e303-50a7-4c4f-835f-651e69aba358" containerID="a9cc8e0b1bc6e0143fd5ec2dc0ede7fd500a35cf69950d4fc987cd58672f96dd" exitCode=0 Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.700402 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwjbz" event={"ID":"b4c5e303-50a7-4c4f-835f-651e69aba358","Type":"ContainerDied","Data":"a9cc8e0b1bc6e0143fd5ec2dc0ede7fd500a35cf69950d4fc987cd58672f96dd"} Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.702757 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3b46f078-a8dc-4eaa-a657-4f6c85c19c06-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-q66z9\" (UID: \"3b46f078-a8dc-4eaa-a657-4f6c85c19c06\") " pod="openshift-marketplace/marketplace-operator-79b997595-q66z9" Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.702800 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42t2l\" (UniqueName: \"kubernetes.io/projected/3b46f078-a8dc-4eaa-a657-4f6c85c19c06-kube-api-access-42t2l\") pod \"marketplace-operator-79b997595-q66z9\" (UID: \"3b46f078-a8dc-4eaa-a657-4f6c85c19c06\") " pod="openshift-marketplace/marketplace-operator-79b997595-q66z9" Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.702830 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3b46f078-a8dc-4eaa-a657-4f6c85c19c06-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-q66z9\" (UID: \"3b46f078-a8dc-4eaa-a657-4f6c85c19c06\") " pod="openshift-marketplace/marketplace-operator-79b997595-q66z9" Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.703140 4725 generic.go:334] "Generic (PLEG): container finished" podID="15dbec8a-0d83-438c-b58b-3eb5aafa1f95" 
containerID="73db548c4f761d59f43893cbe7d37266aecb18d9a07f0fa8659bc545e98be789" exitCode=0 Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.703168 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g9pj7" event={"ID":"15dbec8a-0d83-438c-b58b-3eb5aafa1f95","Type":"ContainerDied","Data":"73db548c4f761d59f43893cbe7d37266aecb18d9a07f0fa8659bc545e98be789"} Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.805113 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3b46f078-a8dc-4eaa-a657-4f6c85c19c06-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-q66z9\" (UID: \"3b46f078-a8dc-4eaa-a657-4f6c85c19c06\") " pod="openshift-marketplace/marketplace-operator-79b997595-q66z9" Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.805186 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42t2l\" (UniqueName: \"kubernetes.io/projected/3b46f078-a8dc-4eaa-a657-4f6c85c19c06-kube-api-access-42t2l\") pod \"marketplace-operator-79b997595-q66z9\" (UID: \"3b46f078-a8dc-4eaa-a657-4f6c85c19c06\") " pod="openshift-marketplace/marketplace-operator-79b997595-q66z9" Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.805219 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3b46f078-a8dc-4eaa-a657-4f6c85c19c06-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-q66z9\" (UID: \"3b46f078-a8dc-4eaa-a657-4f6c85c19c06\") " pod="openshift-marketplace/marketplace-operator-79b997595-q66z9" Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.807849 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3b46f078-a8dc-4eaa-a657-4f6c85c19c06-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-q66z9\" (UID: \"3b46f078-a8dc-4eaa-a657-4f6c85c19c06\") " pod="openshift-marketplace/marketplace-operator-79b997595-q66z9" Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.811887 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3b46f078-a8dc-4eaa-a657-4f6c85c19c06-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-q66z9\" (UID: \"3b46f078-a8dc-4eaa-a657-4f6c85c19c06\") " pod="openshift-marketplace/marketplace-operator-79b997595-q66z9" Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.837748 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42t2l\" (UniqueName: \"kubernetes.io/projected/3b46f078-a8dc-4eaa-a657-4f6c85c19c06-kube-api-access-42t2l\") pod \"marketplace-operator-79b997595-q66z9\" (UID: \"3b46f078-a8dc-4eaa-a657-4f6c85c19c06\") " pod="openshift-marketplace/marketplace-operator-79b997595-q66z9" Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.889203 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-q66z9" Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.918037 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hb586" Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.986298 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fwjbz" Oct 14 13:19:49 crc kubenswrapper[4725]: I1014 13:19:49.990269 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g9pj7" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.011829 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aba4ae84-aa5c-4790-a5cc-b2c865bae5dd-marketplace-trusted-ca\") pod \"aba4ae84-aa5c-4790-a5cc-b2c865bae5dd\" (UID: \"aba4ae84-aa5c-4790-a5cc-b2c865bae5dd\") " Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.011873 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7f2s6\" (UniqueName: \"kubernetes.io/projected/aba4ae84-aa5c-4790-a5cc-b2c865bae5dd-kube-api-access-7f2s6\") pod \"aba4ae84-aa5c-4790-a5cc-b2c865bae5dd\" (UID: \"aba4ae84-aa5c-4790-a5cc-b2c865bae5dd\") " Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.011926 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/aba4ae84-aa5c-4790-a5cc-b2c865bae5dd-marketplace-operator-metrics\") pod \"aba4ae84-aa5c-4790-a5cc-b2c865bae5dd\" (UID: \"aba4ae84-aa5c-4790-a5cc-b2c865bae5dd\") " Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.015114 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aba4ae84-aa5c-4790-a5cc-b2c865bae5dd-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "aba4ae84-aa5c-4790-a5cc-b2c865bae5dd" (UID: "aba4ae84-aa5c-4790-a5cc-b2c865bae5dd"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.027278 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aba4ae84-aa5c-4790-a5cc-b2c865bae5dd-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "aba4ae84-aa5c-4790-a5cc-b2c865bae5dd" (UID: "aba4ae84-aa5c-4790-a5cc-b2c865bae5dd"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.028595 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aba4ae84-aa5c-4790-a5cc-b2c865bae5dd-kube-api-access-7f2s6" (OuterVolumeSpecName: "kube-api-access-7f2s6") pod "aba4ae84-aa5c-4790-a5cc-b2c865bae5dd" (UID: "aba4ae84-aa5c-4790-a5cc-b2c865bae5dd"). InnerVolumeSpecName "kube-api-access-7f2s6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.086245 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6fsmm" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.096171 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dw67l" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.113719 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptcqc\" (UniqueName: \"kubernetes.io/projected/15dbec8a-0d83-438c-b58b-3eb5aafa1f95-kube-api-access-ptcqc\") pod \"15dbec8a-0d83-438c-b58b-3eb5aafa1f95\" (UID: \"15dbec8a-0d83-438c-b58b-3eb5aafa1f95\") " Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.113784 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4c5e303-50a7-4c4f-835f-651e69aba358-utilities\") pod \"b4c5e303-50a7-4c4f-835f-651e69aba358\" (UID: \"b4c5e303-50a7-4c4f-835f-651e69aba358\") " Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.113825 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15dbec8a-0d83-438c-b58b-3eb5aafa1f95-utilities\") pod \"15dbec8a-0d83-438c-b58b-3eb5aafa1f95\" (UID: \"15dbec8a-0d83-438c-b58b-3eb5aafa1f95\") " Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.113851 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15dbec8a-0d83-438c-b58b-3eb5aafa1f95-catalog-content\") pod \"15dbec8a-0d83-438c-b58b-3eb5aafa1f95\" (UID: \"15dbec8a-0d83-438c-b58b-3eb5aafa1f95\") " Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.113884 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4c5e303-50a7-4c4f-835f-651e69aba358-catalog-content\") pod \"b4c5e303-50a7-4c4f-835f-651e69aba358\" (UID: \"b4c5e303-50a7-4c4f-835f-651e69aba358\") " Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.113949 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcljq\" (UniqueName: \"kubernetes.io/projected/b4c5e303-50a7-4c4f-835f-651e69aba358-kube-api-access-vcljq\") pod \"b4c5e303-50a7-4c4f-835f-651e69aba358\" (UID: \"b4c5e303-50a7-4c4f-835f-651e69aba358\") " Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.114174 4725 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aba4ae84-aa5c-4790-a5cc-b2c865bae5dd-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.114190 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7f2s6\" (UniqueName: \"kubernetes.io/projected/aba4ae84-aa5c-4790-a5cc-b2c865bae5dd-kube-api-access-7f2s6\") on node \"crc\" DevicePath \"\"" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.114202 4725 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/aba4ae84-aa5c-4790-a5cc-b2c865bae5dd-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.115290 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15dbec8a-0d83-438c-b58b-3eb5aafa1f95-utilities" (OuterVolumeSpecName: "utilities") pod "15dbec8a-0d83-438c-b58b-3eb5aafa1f95" (UID: "15dbec8a-0d83-438c-b58b-3eb5aafa1f95"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.115321 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4c5e303-50a7-4c4f-835f-651e69aba358-utilities" (OuterVolumeSpecName: "utilities") pod "b4c5e303-50a7-4c4f-835f-651e69aba358" (UID: "b4c5e303-50a7-4c4f-835f-651e69aba358"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.120066 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15dbec8a-0d83-438c-b58b-3eb5aafa1f95-kube-api-access-ptcqc" (OuterVolumeSpecName: "kube-api-access-ptcqc") pod "15dbec8a-0d83-438c-b58b-3eb5aafa1f95" (UID: "15dbec8a-0d83-438c-b58b-3eb5aafa1f95"). InnerVolumeSpecName "kube-api-access-ptcqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.126841 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4c5e303-50a7-4c4f-835f-651e69aba358-kube-api-access-vcljq" (OuterVolumeSpecName: "kube-api-access-vcljq") pod "b4c5e303-50a7-4c4f-835f-651e69aba358" (UID: "b4c5e303-50a7-4c4f-835f-651e69aba358"). InnerVolumeSpecName "kube-api-access-vcljq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.142385 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15dbec8a-0d83-438c-b58b-3eb5aafa1f95-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "15dbec8a-0d83-438c-b58b-3eb5aafa1f95" (UID: "15dbec8a-0d83-438c-b58b-3eb5aafa1f95"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.168498 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-q66z9"] Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.203746 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4c5e303-50a7-4c4f-835f-651e69aba358-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b4c5e303-50a7-4c4f-835f-651e69aba358" (UID: "b4c5e303-50a7-4c4f-835f-651e69aba358"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.214875 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc7e8275-cf1b-4d13-9ba5-95f65a961049-catalog-content\") pod \"cc7e8275-cf1b-4d13-9ba5-95f65a961049\" (UID: \"cc7e8275-cf1b-4d13-9ba5-95f65a961049\") " Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.215133 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vvxh\" (UniqueName: \"kubernetes.io/projected/cc7e8275-cf1b-4d13-9ba5-95f65a961049-kube-api-access-6vvxh\") pod \"cc7e8275-cf1b-4d13-9ba5-95f65a961049\" (UID: \"cc7e8275-cf1b-4d13-9ba5-95f65a961049\") " Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.215251 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57b7ca76-c75e-44f5-b428-6233a92bca51-catalog-content\") pod \"57b7ca76-c75e-44f5-b428-6233a92bca51\" (UID: \"57b7ca76-c75e-44f5-b428-6233a92bca51\") " Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.215342 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnxlj\" (UniqueName: \"kubernetes.io/projected/57b7ca76-c75e-44f5-b428-6233a92bca51-kube-api-access-jnxlj\") pod \"57b7ca76-c75e-44f5-b428-6233a92bca51\" (UID: \"57b7ca76-c75e-44f5-b428-6233a92bca51\") " Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.215442 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc7e8275-cf1b-4d13-9ba5-95f65a961049-utilities\") pod \"cc7e8275-cf1b-4d13-9ba5-95f65a961049\" (UID: \"cc7e8275-cf1b-4d13-9ba5-95f65a961049\") " Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.215549 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57b7ca76-c75e-44f5-b428-6233a92bca51-utilities\") pod \"57b7ca76-c75e-44f5-b428-6233a92bca51\" (UID: \"57b7ca76-c75e-44f5-b428-6233a92bca51\") " Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.215820 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptcqc\" (UniqueName: \"kubernetes.io/projected/15dbec8a-0d83-438c-b58b-3eb5aafa1f95-kube-api-access-ptcqc\") on node \"crc\" DevicePath \"\"" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.215887 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4c5e303-50a7-4c4f-835f-651e69aba358-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.215948 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15dbec8a-0d83-438c-b58b-3eb5aafa1f95-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.216003 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15dbec8a-0d83-438c-b58b-3eb5aafa1f95-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.216067 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4c5e303-50a7-4c4f-835f-651e69aba358-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 13:19:50 crc 
kubenswrapper[4725]: I1014 13:19:50.216124 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcljq\" (UniqueName: \"kubernetes.io/projected/b4c5e303-50a7-4c4f-835f-651e69aba358-kube-api-access-vcljq\") on node \"crc\" DevicePath \"\"" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.216984 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc7e8275-cf1b-4d13-9ba5-95f65a961049-utilities" (OuterVolumeSpecName: "utilities") pod "cc7e8275-cf1b-4d13-9ba5-95f65a961049" (UID: "cc7e8275-cf1b-4d13-9ba5-95f65a961049"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.218048 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc7e8275-cf1b-4d13-9ba5-95f65a961049-kube-api-access-6vvxh" (OuterVolumeSpecName: "kube-api-access-6vvxh") pod "cc7e8275-cf1b-4d13-9ba5-95f65a961049" (UID: "cc7e8275-cf1b-4d13-9ba5-95f65a961049"). InnerVolumeSpecName "kube-api-access-6vvxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.218648 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57b7ca76-c75e-44f5-b428-6233a92bca51-utilities" (OuterVolumeSpecName: "utilities") pod "57b7ca76-c75e-44f5-b428-6233a92bca51" (UID: "57b7ca76-c75e-44f5-b428-6233a92bca51"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.219868 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57b7ca76-c75e-44f5-b428-6233a92bca51-kube-api-access-jnxlj" (OuterVolumeSpecName: "kube-api-access-jnxlj") pod "57b7ca76-c75e-44f5-b428-6233a92bca51" (UID: "57b7ca76-c75e-44f5-b428-6233a92bca51"). InnerVolumeSpecName "kube-api-access-jnxlj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.271217 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57b7ca76-c75e-44f5-b428-6233a92bca51-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57b7ca76-c75e-44f5-b428-6233a92bca51" (UID: "57b7ca76-c75e-44f5-b428-6233a92bca51"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.297419 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc7e8275-cf1b-4d13-9ba5-95f65a961049-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc7e8275-cf1b-4d13-9ba5-95f65a961049" (UID: "cc7e8275-cf1b-4d13-9ba5-95f65a961049"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.317290 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc7e8275-cf1b-4d13-9ba5-95f65a961049-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.317338 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vvxh\" (UniqueName: \"kubernetes.io/projected/cc7e8275-cf1b-4d13-9ba5-95f65a961049-kube-api-access-6vvxh\") on node \"crc\" DevicePath \"\"" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.317353 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57b7ca76-c75e-44f5-b428-6233a92bca51-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.317364 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnxlj\" (UniqueName: \"kubernetes.io/projected/57b7ca76-c75e-44f5-b428-6233a92bca51-kube-api-access-jnxlj\") on node \"crc\" DevicePath \"\"" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.317374 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc7e8275-cf1b-4d13-9ba5-95f65a961049-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.317384 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57b7ca76-c75e-44f5-b428-6233a92bca51-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.709155 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-q66z9" event={"ID":"3b46f078-a8dc-4eaa-a657-4f6c85c19c06","Type":"ContainerStarted","Data":"f5ad3c50ccc842a27f6a2a2dd5bbf037de8d6e72921d38a34420d8cebe7d5a69"} Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.709224 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-q66z9" event={"ID":"3b46f078-a8dc-4eaa-a657-4f6c85c19c06","Type":"ContainerStarted","Data":"471200b1c07d402ebf32f580b926815de2178883e693bb73d9ea9fb8004580cb"} Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.709352 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-q66z9" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.711993 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6fsmm" event={"ID":"57b7ca76-c75e-44f5-b428-6233a92bca51","Type":"ContainerDied","Data":"15ca8843bf2221923b2a8edcc7a1bf085d099a2fe09cd34c97245e2e2084e561"} Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.712068 4725 scope.go:117] "RemoveContainer" containerID="a71ac9025aab4a47628a88fe0728c90233eb64f34ebfec97508968bb9fe50c1d" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.712310 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6fsmm" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.721023 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-q66z9" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.722027 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fwjbz" event={"ID":"b4c5e303-50a7-4c4f-835f-651e69aba358","Type":"ContainerDied","Data":"7ec7c8109ef331e8468734da5bc0c80907c3ba41440e3e7d20c03aa4d18d303e"} Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.722117 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fwjbz" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.724063 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g9pj7" event={"ID":"15dbec8a-0d83-438c-b58b-3eb5aafa1f95","Type":"ContainerDied","Data":"5af0b34ec3f59f2352e60a16b1089f51b5c08dadc923acb6902030370dcf8635"} Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.724157 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g9pj7" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.731979 4725 generic.go:334] "Generic (PLEG): container finished" podID="cc7e8275-cf1b-4d13-9ba5-95f65a961049" containerID="9dec9d1b7a60e1db5de66e27051be6529a55113c3db3b72089f78e81ebd5208c" exitCode=0 Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.732052 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dw67l" event={"ID":"cc7e8275-cf1b-4d13-9ba5-95f65a961049","Type":"ContainerDied","Data":"9dec9d1b7a60e1db5de66e27051be6529a55113c3db3b72089f78e81ebd5208c"} Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.732077 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dw67l" event={"ID":"cc7e8275-cf1b-4d13-9ba5-95f65a961049","Type":"ContainerDied","Data":"081b9d8e0ec3341dc0e9edcd905af1442e183f37f81b377ed88b47aad160e832"} Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.732162 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dw67l" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.735199 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hb586" event={"ID":"aba4ae84-aa5c-4790-a5cc-b2c865bae5dd","Type":"ContainerDied","Data":"38dfb4112cbe4f581c74620e3056bbe8fa68a2b77f99fd0231ec72110d8f003d"} Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.735260 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hb586" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.739225 4725 scope.go:117] "RemoveContainer" containerID="cc8fa8fd2d6af923a8ddf93a8266612d5120b6d11e434b855dd743bb8e7bfd77" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.744971 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-q66z9" podStartSLOduration=1.744950385 podStartE2EDuration="1.744950385s" podCreationTimestamp="2025-10-14 13:19:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:19:50.743357439 +0000 UTC m=+307.591792258" watchObservedRunningTime="2025-10-14 13:19:50.744950385 +0000 UTC m=+307.593385194" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.759041 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6fsmm"] Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.770432 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6fsmm"] Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.773593 4725 scope.go:117] "RemoveContainer" containerID="b58894bbab9a1204c8a4f2e9ead2b2379312196bf39cc462d135fbd887985a64" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.792568 4725 scope.go:117] "RemoveContainer" containerID="a9cc8e0b1bc6e0143fd5ec2dc0ede7fd500a35cf69950d4fc987cd58672f96dd" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.808424 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fwjbz"] Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.814413 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fwjbz"] Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.822483 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g9pj7"] Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.826087 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-g9pj7"] Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.830384 4725 scope.go:117] "RemoveContainer" containerID="d503808146882bcdf595f2f9627552a2bce5df141c5ee1b051b7bdc1365f1a66" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.840569 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dw67l"] Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.843789 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dw67l"] Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.891127 4725 scope.go:117] "RemoveContainer" containerID="938b37417736147a73ed125021fffb83138df821b3e9549f00a2c804682d6790" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.908948 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hb586"] Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.932424 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hb586"] Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.943624 4725 scope.go:117] "RemoveContainer" containerID="73db548c4f761d59f43893cbe7d37266aecb18d9a07f0fa8659bc545e98be789" Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 
13:19:50.960657 4725 scope.go:117] "RemoveContainer" containerID="a513be71a6f537afe641d601cf112fc3c83babfb98153d0cb0bb9cd9d94a32c3"
Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.972607 4725 scope.go:117] "RemoveContainer" containerID="51047a23ff444ea88e3470372bf44ae4252887a2bcb6908890c659b70358935d"
Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.983505 4725 scope.go:117] "RemoveContainer" containerID="9dec9d1b7a60e1db5de66e27051be6529a55113c3db3b72089f78e81ebd5208c"
Oct 14 13:19:50 crc kubenswrapper[4725]: I1014 13:19:50.994498 4725 scope.go:117] "RemoveContainer" containerID="39021970d56810f431fd99f12a7d91ba15da7f48aedd498cf595bde016de9e4b"
Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.006267 4725 scope.go:117] "RemoveContainer" containerID="bcc9635aab3450c4443c53c039c8446770ffec1359a9b0f7286a1c7d4336e2b2"
Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.017925 4725 scope.go:117] "RemoveContainer" containerID="9dec9d1b7a60e1db5de66e27051be6529a55113c3db3b72089f78e81ebd5208c"
Oct 14 13:19:51 crc kubenswrapper[4725]: E1014 13:19:51.018243 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dec9d1b7a60e1db5de66e27051be6529a55113c3db3b72089f78e81ebd5208c\": container with ID starting with 9dec9d1b7a60e1db5de66e27051be6529a55113c3db3b72089f78e81ebd5208c not found: ID does not exist" containerID="9dec9d1b7a60e1db5de66e27051be6529a55113c3db3b72089f78e81ebd5208c"
Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.018296 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dec9d1b7a60e1db5de66e27051be6529a55113c3db3b72089f78e81ebd5208c"} err="failed to get container status \"9dec9d1b7a60e1db5de66e27051be6529a55113c3db3b72089f78e81ebd5208c\": rpc error: code = NotFound desc = could not find container \"9dec9d1b7a60e1db5de66e27051be6529a55113c3db3b72089f78e81ebd5208c\": container with ID starting with 9dec9d1b7a60e1db5de66e27051be6529a55113c3db3b72089f78e81ebd5208c not found: ID does not exist"
Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.018330 4725 scope.go:117] "RemoveContainer" containerID="39021970d56810f431fd99f12a7d91ba15da7f48aedd498cf595bde016de9e4b"
Oct 14 13:19:51 crc kubenswrapper[4725]: E1014 13:19:51.018680 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39021970d56810f431fd99f12a7d91ba15da7f48aedd498cf595bde016de9e4b\": container with ID starting with 39021970d56810f431fd99f12a7d91ba15da7f48aedd498cf595bde016de9e4b not found: ID does not exist" containerID="39021970d56810f431fd99f12a7d91ba15da7f48aedd498cf595bde016de9e4b"
Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.018709 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39021970d56810f431fd99f12a7d91ba15da7f48aedd498cf595bde016de9e4b"} err="failed to get container status \"39021970d56810f431fd99f12a7d91ba15da7f48aedd498cf595bde016de9e4b\": rpc error: code = NotFound desc = could not find container \"39021970d56810f431fd99f12a7d91ba15da7f48aedd498cf595bde016de9e4b\": container with ID starting with 39021970d56810f431fd99f12a7d91ba15da7f48aedd498cf595bde016de9e4b not found: ID does not exist"
Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.018725 4725 scope.go:117] "RemoveContainer" containerID="bcc9635aab3450c4443c53c039c8446770ffec1359a9b0f7286a1c7d4336e2b2"
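The RemoveContainer / "not found" pairs above are the benign side of container garbage collection: by the time the kubelet re-requests removal, the container has already been deleted along with its pod sandbox, so the runtime answers NotFound, and the kubelet records "DeleteContainer returned error" and moves on rather than retrying. A small sketch of that idempotent-delete pattern; errNotFound and removeContainer are illustrative stand-ins, not real CRI client calls:

    package main

    import (
        "errors"
        "fmt"
    )

    // errNotFound stands in for the gRPC NotFound status seen in the log.
    var errNotFound = errors.New("NotFound: ID does not exist")

    // removeContainer models the runtime call that fails once the container
    // is already gone.
    func removeContainer(id string, store map[string]bool) error {
        if !store[id] {
            return fmt.Errorf("could not find container %q: %w", id, errNotFound)
        }
        delete(store, id)
        return nil
    }

    func removeIdempotent(id string, store map[string]bool) {
        if err := removeContainer(id, store); err != nil {
            if errors.Is(err, errNotFound) {
                // Already deleted elsewhere (e.g. with the pod sandbox): the
                // desired state is reached, so log the error and stop retrying.
                fmt.Printf("DeleteContainer returned error (ignored): %v\n", err)
                return
            }
            fmt.Printf("DeleteContainer failed: %v\n", err)
            return
        }
        fmt.Printf("removed container %s\n", id)
    }

    func main() {
        store := map[string]bool{"9dec9d1b": true}
        removeIdempotent("9dec9d1b", store) // first call removes it
        removeIdempotent("9dec9d1b", store) // second call hits NotFound, ignored
    }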
Oct 14 13:19:51 crc kubenswrapper[4725]: E1014 13:19:51.019733 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcc9635aab3450c4443c53c039c8446770ffec1359a9b0f7286a1c7d4336e2b2\": container with ID starting with bcc9635aab3450c4443c53c039c8446770ffec1359a9b0f7286a1c7d4336e2b2 not found: ID does not exist" containerID="bcc9635aab3450c4443c53c039c8446770ffec1359a9b0f7286a1c7d4336e2b2"
Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.019770 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcc9635aab3450c4443c53c039c8446770ffec1359a9b0f7286a1c7d4336e2b2"} err="failed to get container status \"bcc9635aab3450c4443c53c039c8446770ffec1359a9b0f7286a1c7d4336e2b2\": rpc error: code = NotFound desc = could not find container \"bcc9635aab3450c4443c53c039c8446770ffec1359a9b0f7286a1c7d4336e2b2\": container with ID starting with bcc9635aab3450c4443c53c039c8446770ffec1359a9b0f7286a1c7d4336e2b2 not found: ID does not exist"
Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.019809 4725 scope.go:117] "RemoveContainer" containerID="2237d670b3b2868ab6ba323e777265ab33059959b81c97d04b37e51584eec22f"
Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.641446 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xmq8l"]
Oct 14 13:19:51 crc kubenswrapper[4725]: E1014 13:19:51.641666 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc7e8275-cf1b-4d13-9ba5-95f65a961049" containerName="extract-utilities"
Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.641680 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc7e8275-cf1b-4d13-9ba5-95f65a961049" containerName="extract-utilities"
Oct 14 13:19:51 crc kubenswrapper[4725]: E1014 13:19:51.641688 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4c5e303-50a7-4c4f-835f-651e69aba358" containerName="extract-content"
Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.641694 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4c5e303-50a7-4c4f-835f-651e69aba358" containerName="extract-content"
Oct 14 13:19:51 crc kubenswrapper[4725]: E1014 13:19:51.641703 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15dbec8a-0d83-438c-b58b-3eb5aafa1f95" containerName="extract-content"
Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.641709 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="15dbec8a-0d83-438c-b58b-3eb5aafa1f95" containerName="extract-content"
Oct 14 13:19:51 crc kubenswrapper[4725]: E1014 13:19:51.641719 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57b7ca76-c75e-44f5-b428-6233a92bca51" containerName="extract-content"
Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.641725 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="57b7ca76-c75e-44f5-b428-6233a92bca51" containerName="extract-content"
Oct 14 13:19:51 crc kubenswrapper[4725]: E1014 13:19:51.641732 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc7e8275-cf1b-4d13-9ba5-95f65a961049" containerName="registry-server"
Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.641739 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc7e8275-cf1b-4d13-9ba5-95f65a961049" containerName="registry-server"
Oct 14 13:19:51 crc kubenswrapper[4725]: E1014 13:19:51.641748 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aba4ae84-aa5c-4790-a5cc-b2c865bae5dd"
containerName="marketplace-operator" Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.641753 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="aba4ae84-aa5c-4790-a5cc-b2c865bae5dd" containerName="marketplace-operator" Oct 14 13:19:51 crc kubenswrapper[4725]: E1014 13:19:51.641760 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc7e8275-cf1b-4d13-9ba5-95f65a961049" containerName="extract-content" Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.641766 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc7e8275-cf1b-4d13-9ba5-95f65a961049" containerName="extract-content" Oct 14 13:19:51 crc kubenswrapper[4725]: E1014 13:19:51.641776 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57b7ca76-c75e-44f5-b428-6233a92bca51" containerName="registry-server" Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.641783 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="57b7ca76-c75e-44f5-b428-6233a92bca51" containerName="registry-server" Oct 14 13:19:51 crc kubenswrapper[4725]: E1014 13:19:51.641792 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4c5e303-50a7-4c4f-835f-651e69aba358" containerName="extract-utilities" Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.641799 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4c5e303-50a7-4c4f-835f-651e69aba358" containerName="extract-utilities" Oct 14 13:19:51 crc kubenswrapper[4725]: E1014 13:19:51.641809 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15dbec8a-0d83-438c-b58b-3eb5aafa1f95" containerName="registry-server" Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.641816 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="15dbec8a-0d83-438c-b58b-3eb5aafa1f95" containerName="registry-server" Oct 14 13:19:51 crc kubenswrapper[4725]: E1014 13:19:51.641828 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4c5e303-50a7-4c4f-835f-651e69aba358" containerName="registry-server" Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.641834 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4c5e303-50a7-4c4f-835f-651e69aba358" containerName="registry-server" Oct 14 13:19:51 crc kubenswrapper[4725]: E1014 13:19:51.641845 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15dbec8a-0d83-438c-b58b-3eb5aafa1f95" containerName="extract-utilities" Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.641850 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="15dbec8a-0d83-438c-b58b-3eb5aafa1f95" containerName="extract-utilities" Oct 14 13:19:51 crc kubenswrapper[4725]: E1014 13:19:51.641859 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57b7ca76-c75e-44f5-b428-6233a92bca51" containerName="extract-utilities" Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.641865 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="57b7ca76-c75e-44f5-b428-6233a92bca51" containerName="extract-utilities" Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.641947 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="15dbec8a-0d83-438c-b58b-3eb5aafa1f95" containerName="registry-server" Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.641963 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="aba4ae84-aa5c-4790-a5cc-b2c865bae5dd" containerName="marketplace-operator" Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.641970 4725 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="57b7ca76-c75e-44f5-b428-6233a92bca51" containerName="registry-server" Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.641980 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc7e8275-cf1b-4d13-9ba5-95f65a961049" containerName="registry-server" Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.641986 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4c5e303-50a7-4c4f-835f-651e69aba358" containerName="registry-server" Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.642673 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xmq8l" Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.645107 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.654926 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xmq8l"] Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.736972 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8154e8c-6473-4716-ba8a-b4141852b960-utilities\") pod \"redhat-marketplace-xmq8l\" (UID: \"d8154e8c-6473-4716-ba8a-b4141852b960\") " pod="openshift-marketplace/redhat-marketplace-xmq8l" Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.737670 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8154e8c-6473-4716-ba8a-b4141852b960-catalog-content\") pod \"redhat-marketplace-xmq8l\" (UID: \"d8154e8c-6473-4716-ba8a-b4141852b960\") " pod="openshift-marketplace/redhat-marketplace-xmq8l" Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.737802 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f8t8\" (UniqueName: \"kubernetes.io/projected/d8154e8c-6473-4716-ba8a-b4141852b960-kube-api-access-4f8t8\") pod \"redhat-marketplace-xmq8l\" (UID: \"d8154e8c-6473-4716-ba8a-b4141852b960\") " pod="openshift-marketplace/redhat-marketplace-xmq8l" Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.839277 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8154e8c-6473-4716-ba8a-b4141852b960-catalog-content\") pod \"redhat-marketplace-xmq8l\" (UID: \"d8154e8c-6473-4716-ba8a-b4141852b960\") " pod="openshift-marketplace/redhat-marketplace-xmq8l" Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.839427 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f8t8\" (UniqueName: \"kubernetes.io/projected/d8154e8c-6473-4716-ba8a-b4141852b960-kube-api-access-4f8t8\") pod \"redhat-marketplace-xmq8l\" (UID: \"d8154e8c-6473-4716-ba8a-b4141852b960\") " pod="openshift-marketplace/redhat-marketplace-xmq8l" Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.839511 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8154e8c-6473-4716-ba8a-b4141852b960-utilities\") pod \"redhat-marketplace-xmq8l\" (UID: \"d8154e8c-6473-4716-ba8a-b4141852b960\") " pod="openshift-marketplace/redhat-marketplace-xmq8l" Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.840303 4725 
Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.840303 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8154e8c-6473-4716-ba8a-b4141852b960-catalog-content\") pod \"redhat-marketplace-xmq8l\" (UID: \"d8154e8c-6473-4716-ba8a-b4141852b960\") " pod="openshift-marketplace/redhat-marketplace-xmq8l"
Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.841283 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g5hgh"]
Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.843114 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g5hgh"
Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.843834 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8154e8c-6473-4716-ba8a-b4141852b960-utilities\") pod \"redhat-marketplace-xmq8l\" (UID: \"d8154e8c-6473-4716-ba8a-b4141852b960\") " pod="openshift-marketplace/redhat-marketplace-xmq8l"
Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.845058 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.853032 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g5hgh"]
Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.865124 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f8t8\" (UniqueName: \"kubernetes.io/projected/d8154e8c-6473-4716-ba8a-b4141852b960-kube-api-access-4f8t8\") pod \"redhat-marketplace-xmq8l\" (UID: \"d8154e8c-6473-4716-ba8a-b4141852b960\") " pod="openshift-marketplace/redhat-marketplace-xmq8l"
Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.928987 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15dbec8a-0d83-438c-b58b-3eb5aafa1f95" path="/var/lib/kubelet/pods/15dbec8a-0d83-438c-b58b-3eb5aafa1f95/volumes"
Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.930774 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57b7ca76-c75e-44f5-b428-6233a92bca51" path="/var/lib/kubelet/pods/57b7ca76-c75e-44f5-b428-6233a92bca51/volumes"
Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.932644 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aba4ae84-aa5c-4790-a5cc-b2c865bae5dd" path="/var/lib/kubelet/pods/aba4ae84-aa5c-4790-a5cc-b2c865bae5dd/volumes"
Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.935184 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4c5e303-50a7-4c4f-835f-651e69aba358" path="/var/lib/kubelet/pods/b4c5e303-50a7-4c4f-835f-651e69aba358/volumes"
Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.937213 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc7e8275-cf1b-4d13-9ba5-95f65a961049" path="/var/lib/kubelet/pods/cc7e8275-cf1b-4d13-9ba5-95f65a961049/volumes"
Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.940490 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48g9h\" (UniqueName: \"kubernetes.io/projected/58b9a349-dfe9-4cc9-851a-80c8bfc2f898-kube-api-access-48g9h\") pod \"redhat-operators-g5hgh\" (UID: \"58b9a349-dfe9-4cc9-851a-80c8bfc2f898\") " pod="openshift-marketplace/redhat-operators-g5hgh"
Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.940567 4725
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58b9a349-dfe9-4cc9-851a-80c8bfc2f898-utilities\") pod \"redhat-operators-g5hgh\" (UID: \"58b9a349-dfe9-4cc9-851a-80c8bfc2f898\") " pod="openshift-marketplace/redhat-operators-g5hgh" Oct 14 13:19:51 crc kubenswrapper[4725]: I1014 13:19:51.940673 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58b9a349-dfe9-4cc9-851a-80c8bfc2f898-catalog-content\") pod \"redhat-operators-g5hgh\" (UID: \"58b9a349-dfe9-4cc9-851a-80c8bfc2f898\") " pod="openshift-marketplace/redhat-operators-g5hgh" Oct 14 13:19:52 crc kubenswrapper[4725]: I1014 13:19:52.012223 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xmq8l" Oct 14 13:19:52 crc kubenswrapper[4725]: I1014 13:19:52.042878 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58b9a349-dfe9-4cc9-851a-80c8bfc2f898-catalog-content\") pod \"redhat-operators-g5hgh\" (UID: \"58b9a349-dfe9-4cc9-851a-80c8bfc2f898\") " pod="openshift-marketplace/redhat-operators-g5hgh" Oct 14 13:19:52 crc kubenswrapper[4725]: I1014 13:19:52.043024 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48g9h\" (UniqueName: \"kubernetes.io/projected/58b9a349-dfe9-4cc9-851a-80c8bfc2f898-kube-api-access-48g9h\") pod \"redhat-operators-g5hgh\" (UID: \"58b9a349-dfe9-4cc9-851a-80c8bfc2f898\") " pod="openshift-marketplace/redhat-operators-g5hgh" Oct 14 13:19:52 crc kubenswrapper[4725]: I1014 13:19:52.043082 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58b9a349-dfe9-4cc9-851a-80c8bfc2f898-utilities\") pod \"redhat-operators-g5hgh\" (UID: \"58b9a349-dfe9-4cc9-851a-80c8bfc2f898\") " pod="openshift-marketplace/redhat-operators-g5hgh" Oct 14 13:19:52 crc kubenswrapper[4725]: I1014 13:19:52.043719 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58b9a349-dfe9-4cc9-851a-80c8bfc2f898-catalog-content\") pod \"redhat-operators-g5hgh\" (UID: \"58b9a349-dfe9-4cc9-851a-80c8bfc2f898\") " pod="openshift-marketplace/redhat-operators-g5hgh" Oct 14 13:19:52 crc kubenswrapper[4725]: I1014 13:19:52.043738 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58b9a349-dfe9-4cc9-851a-80c8bfc2f898-utilities\") pod \"redhat-operators-g5hgh\" (UID: \"58b9a349-dfe9-4cc9-851a-80c8bfc2f898\") " pod="openshift-marketplace/redhat-operators-g5hgh" Oct 14 13:19:52 crc kubenswrapper[4725]: I1014 13:19:52.066027 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48g9h\" (UniqueName: \"kubernetes.io/projected/58b9a349-dfe9-4cc9-851a-80c8bfc2f898-kube-api-access-48g9h\") pod \"redhat-operators-g5hgh\" (UID: \"58b9a349-dfe9-4cc9-851a-80c8bfc2f898\") " pod="openshift-marketplace/redhat-operators-g5hgh" Oct 14 13:19:52 crc kubenswrapper[4725]: I1014 13:19:52.186531 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g5hgh" Oct 14 13:19:52 crc kubenswrapper[4725]: I1014 13:19:52.224683 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xmq8l"] Oct 14 13:19:52 crc kubenswrapper[4725]: W1014 13:19:52.233180 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8154e8c_6473_4716_ba8a_b4141852b960.slice/crio-a28aa3a1616f6832f647700aa98ccb181053cddc8df2732bf36d66e7cf306205 WatchSource:0}: Error finding container a28aa3a1616f6832f647700aa98ccb181053cddc8df2732bf36d66e7cf306205: Status 404 returned error can't find the container with id a28aa3a1616f6832f647700aa98ccb181053cddc8df2732bf36d66e7cf306205 Oct 14 13:19:52 crc kubenswrapper[4725]: I1014 13:19:52.375948 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g5hgh"] Oct 14 13:19:52 crc kubenswrapper[4725]: W1014 13:19:52.442583 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58b9a349_dfe9_4cc9_851a_80c8bfc2f898.slice/crio-5b4a2ed5cb42839c4a2f9ff15176bc0ce2564fb7c4c83adf8904246bc07f319a WatchSource:0}: Error finding container 5b4a2ed5cb42839c4a2f9ff15176bc0ce2564fb7c4c83adf8904246bc07f319a: Status 404 returned error can't find the container with id 5b4a2ed5cb42839c4a2f9ff15176bc0ce2564fb7c4c83adf8904246bc07f319a Oct 14 13:19:52 crc kubenswrapper[4725]: I1014 13:19:52.755836 4725 generic.go:334] "Generic (PLEG): container finished" podID="58b9a349-dfe9-4cc9-851a-80c8bfc2f898" containerID="726aed0f574bd42ec3adb85f7ef0557aa5d6e0556693e09426f4b97e1226ca38" exitCode=0 Oct 14 13:19:52 crc kubenswrapper[4725]: I1014 13:19:52.755908 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g5hgh" event={"ID":"58b9a349-dfe9-4cc9-851a-80c8bfc2f898","Type":"ContainerDied","Data":"726aed0f574bd42ec3adb85f7ef0557aa5d6e0556693e09426f4b97e1226ca38"} Oct 14 13:19:52 crc kubenswrapper[4725]: I1014 13:19:52.755994 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g5hgh" event={"ID":"58b9a349-dfe9-4cc9-851a-80c8bfc2f898","Type":"ContainerStarted","Data":"5b4a2ed5cb42839c4a2f9ff15176bc0ce2564fb7c4c83adf8904246bc07f319a"} Oct 14 13:19:52 crc kubenswrapper[4725]: I1014 13:19:52.762857 4725 generic.go:334] "Generic (PLEG): container finished" podID="d8154e8c-6473-4716-ba8a-b4141852b960" containerID="2d5d768c5cb12a9af8f97272525e8e985784afc7fbbd58294b424b62caf0f620" exitCode=0 Oct 14 13:19:52 crc kubenswrapper[4725]: I1014 13:19:52.762956 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xmq8l" event={"ID":"d8154e8c-6473-4716-ba8a-b4141852b960","Type":"ContainerDied","Data":"2d5d768c5cb12a9af8f97272525e8e985784afc7fbbd58294b424b62caf0f620"} Oct 14 13:19:52 crc kubenswrapper[4725]: I1014 13:19:52.763019 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xmq8l" event={"ID":"d8154e8c-6473-4716-ba8a-b4141852b960","Type":"ContainerStarted","Data":"a28aa3a1616f6832f647700aa98ccb181053cddc8df2732bf36d66e7cf306205"} Oct 14 13:19:53 crc kubenswrapper[4725]: I1014 13:19:53.769870 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g5hgh" 
event={"ID":"58b9a349-dfe9-4cc9-851a-80c8bfc2f898","Type":"ContainerStarted","Data":"c4ff531e4d37300af7f2857640f499dddb0edeac06d1d0561112ce34f00ef89b"} Oct 14 13:19:54 crc kubenswrapper[4725]: I1014 13:19:54.044352 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rjr78"] Oct 14 13:19:54 crc kubenswrapper[4725]: I1014 13:19:54.045348 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rjr78" Oct 14 13:19:54 crc kubenswrapper[4725]: I1014 13:19:54.047418 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Oct 14 13:19:54 crc kubenswrapper[4725]: I1014 13:19:54.064378 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rjr78"] Oct 14 13:19:54 crc kubenswrapper[4725]: I1014 13:19:54.179246 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08e081de-8c18-4fc2-8b9b-844352989e96-utilities\") pod \"certified-operators-rjr78\" (UID: \"08e081de-8c18-4fc2-8b9b-844352989e96\") " pod="openshift-marketplace/certified-operators-rjr78" Oct 14 13:19:54 crc kubenswrapper[4725]: I1014 13:19:54.179335 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08e081de-8c18-4fc2-8b9b-844352989e96-catalog-content\") pod \"certified-operators-rjr78\" (UID: \"08e081de-8c18-4fc2-8b9b-844352989e96\") " pod="openshift-marketplace/certified-operators-rjr78" Oct 14 13:19:54 crc kubenswrapper[4725]: I1014 13:19:54.179396 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxsm2\" (UniqueName: \"kubernetes.io/projected/08e081de-8c18-4fc2-8b9b-844352989e96-kube-api-access-cxsm2\") pod \"certified-operators-rjr78\" (UID: \"08e081de-8c18-4fc2-8b9b-844352989e96\") " pod="openshift-marketplace/certified-operators-rjr78" Oct 14 13:19:54 crc kubenswrapper[4725]: I1014 13:19:54.243553 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-chnk6"] Oct 14 13:19:54 crc kubenswrapper[4725]: I1014 13:19:54.244838 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-chnk6" Oct 14 13:19:54 crc kubenswrapper[4725]: I1014 13:19:54.250969 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Oct 14 13:19:54 crc kubenswrapper[4725]: I1014 13:19:54.277318 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-chnk6"] Oct 14 13:19:54 crc kubenswrapper[4725]: I1014 13:19:54.280565 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxsm2\" (UniqueName: \"kubernetes.io/projected/08e081de-8c18-4fc2-8b9b-844352989e96-kube-api-access-cxsm2\") pod \"certified-operators-rjr78\" (UID: \"08e081de-8c18-4fc2-8b9b-844352989e96\") " pod="openshift-marketplace/certified-operators-rjr78" Oct 14 13:19:54 crc kubenswrapper[4725]: I1014 13:19:54.280678 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08e081de-8c18-4fc2-8b9b-844352989e96-utilities\") pod \"certified-operators-rjr78\" (UID: \"08e081de-8c18-4fc2-8b9b-844352989e96\") " pod="openshift-marketplace/certified-operators-rjr78" Oct 14 13:19:54 crc kubenswrapper[4725]: I1014 13:19:54.280716 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08e081de-8c18-4fc2-8b9b-844352989e96-catalog-content\") pod \"certified-operators-rjr78\" (UID: \"08e081de-8c18-4fc2-8b9b-844352989e96\") " pod="openshift-marketplace/certified-operators-rjr78" Oct 14 13:19:54 crc kubenswrapper[4725]: I1014 13:19:54.281353 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08e081de-8c18-4fc2-8b9b-844352989e96-catalog-content\") pod \"certified-operators-rjr78\" (UID: \"08e081de-8c18-4fc2-8b9b-844352989e96\") " pod="openshift-marketplace/certified-operators-rjr78" Oct 14 13:19:54 crc kubenswrapper[4725]: I1014 13:19:54.281530 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08e081de-8c18-4fc2-8b9b-844352989e96-utilities\") pod \"certified-operators-rjr78\" (UID: \"08e081de-8c18-4fc2-8b9b-844352989e96\") " pod="openshift-marketplace/certified-operators-rjr78" Oct 14 13:19:54 crc kubenswrapper[4725]: I1014 13:19:54.304637 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxsm2\" (UniqueName: \"kubernetes.io/projected/08e081de-8c18-4fc2-8b9b-844352989e96-kube-api-access-cxsm2\") pod \"certified-operators-rjr78\" (UID: \"08e081de-8c18-4fc2-8b9b-844352989e96\") " pod="openshift-marketplace/certified-operators-rjr78" Oct 14 13:19:54 crc kubenswrapper[4725]: I1014 13:19:54.382040 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48c6e1e3-4851-4faf-bcb0-d12f5c73f6a9-utilities\") pod \"community-operators-chnk6\" (UID: \"48c6e1e3-4851-4faf-bcb0-d12f5c73f6a9\") " pod="openshift-marketplace/community-operators-chnk6" Oct 14 13:19:54 crc kubenswrapper[4725]: I1014 13:19:54.382125 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48c6e1e3-4851-4faf-bcb0-d12f5c73f6a9-catalog-content\") pod \"community-operators-chnk6\" (UID: 
\"48c6e1e3-4851-4faf-bcb0-d12f5c73f6a9\") " pod="openshift-marketplace/community-operators-chnk6" Oct 14 13:19:54 crc kubenswrapper[4725]: I1014 13:19:54.382396 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfb6q\" (UniqueName: \"kubernetes.io/projected/48c6e1e3-4851-4faf-bcb0-d12f5c73f6a9-kube-api-access-bfb6q\") pod \"community-operators-chnk6\" (UID: \"48c6e1e3-4851-4faf-bcb0-d12f5c73f6a9\") " pod="openshift-marketplace/community-operators-chnk6" Oct 14 13:19:54 crc kubenswrapper[4725]: I1014 13:19:54.474232 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rjr78" Oct 14 13:19:54 crc kubenswrapper[4725]: I1014 13:19:54.484337 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48c6e1e3-4851-4faf-bcb0-d12f5c73f6a9-utilities\") pod \"community-operators-chnk6\" (UID: \"48c6e1e3-4851-4faf-bcb0-d12f5c73f6a9\") " pod="openshift-marketplace/community-operators-chnk6" Oct 14 13:19:54 crc kubenswrapper[4725]: I1014 13:19:54.484406 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48c6e1e3-4851-4faf-bcb0-d12f5c73f6a9-catalog-content\") pod \"community-operators-chnk6\" (UID: \"48c6e1e3-4851-4faf-bcb0-d12f5c73f6a9\") " pod="openshift-marketplace/community-operators-chnk6" Oct 14 13:19:54 crc kubenswrapper[4725]: I1014 13:19:54.484501 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfb6q\" (UniqueName: \"kubernetes.io/projected/48c6e1e3-4851-4faf-bcb0-d12f5c73f6a9-kube-api-access-bfb6q\") pod \"community-operators-chnk6\" (UID: \"48c6e1e3-4851-4faf-bcb0-d12f5c73f6a9\") " pod="openshift-marketplace/community-operators-chnk6" Oct 14 13:19:54 crc kubenswrapper[4725]: I1014 13:19:54.485208 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48c6e1e3-4851-4faf-bcb0-d12f5c73f6a9-utilities\") pod \"community-operators-chnk6\" (UID: \"48c6e1e3-4851-4faf-bcb0-d12f5c73f6a9\") " pod="openshift-marketplace/community-operators-chnk6" Oct 14 13:19:54 crc kubenswrapper[4725]: I1014 13:19:54.485408 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48c6e1e3-4851-4faf-bcb0-d12f5c73f6a9-catalog-content\") pod \"community-operators-chnk6\" (UID: \"48c6e1e3-4851-4faf-bcb0-d12f5c73f6a9\") " pod="openshift-marketplace/community-operators-chnk6" Oct 14 13:19:54 crc kubenswrapper[4725]: I1014 13:19:54.503584 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfb6q\" (UniqueName: \"kubernetes.io/projected/48c6e1e3-4851-4faf-bcb0-d12f5c73f6a9-kube-api-access-bfb6q\") pod \"community-operators-chnk6\" (UID: \"48c6e1e3-4851-4faf-bcb0-d12f5c73f6a9\") " pod="openshift-marketplace/community-operators-chnk6" Oct 14 13:19:54 crc kubenswrapper[4725]: I1014 13:19:54.564715 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-chnk6" Oct 14 13:19:54 crc kubenswrapper[4725]: I1014 13:19:54.713097 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rjr78"] Oct 14 13:19:54 crc kubenswrapper[4725]: W1014 13:19:54.722577 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08e081de_8c18_4fc2_8b9b_844352989e96.slice/crio-f059ed29ec7b8d7387abda1a39754c83e09d198f701d4458ec9e2b60d7a06138 WatchSource:0}: Error finding container f059ed29ec7b8d7387abda1a39754c83e09d198f701d4458ec9e2b60d7a06138: Status 404 returned error can't find the container with id f059ed29ec7b8d7387abda1a39754c83e09d198f701d4458ec9e2b60d7a06138 Oct 14 13:19:54 crc kubenswrapper[4725]: I1014 13:19:54.784840 4725 generic.go:334] "Generic (PLEG): container finished" podID="d8154e8c-6473-4716-ba8a-b4141852b960" containerID="7bd9ee4444f13aa274f836fa008f8d52fd456f4b3004d6d09e22c58eec4b988f" exitCode=0 Oct 14 13:19:54 crc kubenswrapper[4725]: I1014 13:19:54.784945 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xmq8l" event={"ID":"d8154e8c-6473-4716-ba8a-b4141852b960","Type":"ContainerDied","Data":"7bd9ee4444f13aa274f836fa008f8d52fd456f4b3004d6d09e22c58eec4b988f"} Oct 14 13:19:54 crc kubenswrapper[4725]: I1014 13:19:54.793885 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rjr78" event={"ID":"08e081de-8c18-4fc2-8b9b-844352989e96","Type":"ContainerStarted","Data":"f059ed29ec7b8d7387abda1a39754c83e09d198f701d4458ec9e2b60d7a06138"} Oct 14 13:19:54 crc kubenswrapper[4725]: I1014 13:19:54.796430 4725 generic.go:334] "Generic (PLEG): container finished" podID="58b9a349-dfe9-4cc9-851a-80c8bfc2f898" containerID="c4ff531e4d37300af7f2857640f499dddb0edeac06d1d0561112ce34f00ef89b" exitCode=0 Oct 14 13:19:54 crc kubenswrapper[4725]: I1014 13:19:54.796493 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g5hgh" event={"ID":"58b9a349-dfe9-4cc9-851a-80c8bfc2f898","Type":"ContainerDied","Data":"c4ff531e4d37300af7f2857640f499dddb0edeac06d1d0561112ce34f00ef89b"} Oct 14 13:19:54 crc kubenswrapper[4725]: I1014 13:19:54.798389 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-chnk6"] Oct 14 13:19:55 crc kubenswrapper[4725]: I1014 13:19:55.809815 4725 generic.go:334] "Generic (PLEG): container finished" podID="08e081de-8c18-4fc2-8b9b-844352989e96" containerID="45c7d609f1ec3cd332cdaf49344e7479687ce80093baf411c3ef00959700ae44" exitCode=0 Oct 14 13:19:55 crc kubenswrapper[4725]: I1014 13:19:55.810122 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rjr78" event={"ID":"08e081de-8c18-4fc2-8b9b-844352989e96","Type":"ContainerDied","Data":"45c7d609f1ec3cd332cdaf49344e7479687ce80093baf411c3ef00959700ae44"} Oct 14 13:19:55 crc kubenswrapper[4725]: I1014 13:19:55.828750 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g5hgh" event={"ID":"58b9a349-dfe9-4cc9-851a-80c8bfc2f898","Type":"ContainerStarted","Data":"7779b300b4cfb49895bee51557c3a3b62dfd1574355dcd7721effd6d26ccb9ea"} Oct 14 13:19:55 crc kubenswrapper[4725]: I1014 13:19:55.832826 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xmq8l" 
event={"ID":"d8154e8c-6473-4716-ba8a-b4141852b960","Type":"ContainerStarted","Data":"2add1efd4f34f501becd14ad3dea64b125650eef80e0bdc424402c0ab8f24e82"} Oct 14 13:19:55 crc kubenswrapper[4725]: I1014 13:19:55.840920 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chnk6" event={"ID":"48c6e1e3-4851-4faf-bcb0-d12f5c73f6a9","Type":"ContainerDied","Data":"425aeeceb035d9858da6e502d136a2e8f58b63ac722ae7b75cbc47d80a6950a5"} Oct 14 13:19:55 crc kubenswrapper[4725]: I1014 13:19:55.840817 4725 generic.go:334] "Generic (PLEG): container finished" podID="48c6e1e3-4851-4faf-bcb0-d12f5c73f6a9" containerID="425aeeceb035d9858da6e502d136a2e8f58b63ac722ae7b75cbc47d80a6950a5" exitCode=0 Oct 14 13:19:55 crc kubenswrapper[4725]: I1014 13:19:55.841594 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chnk6" event={"ID":"48c6e1e3-4851-4faf-bcb0-d12f5c73f6a9","Type":"ContainerStarted","Data":"d737a75d40dfea8fe6cea074d15d94486b952d54e09a9aaf18b5400f60cffec2"} Oct 14 13:19:55 crc kubenswrapper[4725]: I1014 13:19:55.882246 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xmq8l" podStartSLOduration=2.401239187 podStartE2EDuration="4.882229253s" podCreationTimestamp="2025-10-14 13:19:51 +0000 UTC" firstStartedPulling="2025-10-14 13:19:52.764681229 +0000 UTC m=+309.613116038" lastFinishedPulling="2025-10-14 13:19:55.245671285 +0000 UTC m=+312.094106104" observedRunningTime="2025-10-14 13:19:55.881461831 +0000 UTC m=+312.729896650" watchObservedRunningTime="2025-10-14 13:19:55.882229253 +0000 UTC m=+312.730664062" Oct 14 13:19:55 crc kubenswrapper[4725]: I1014 13:19:55.899762 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g5hgh" podStartSLOduration=2.17791939 podStartE2EDuration="4.899733536s" podCreationTimestamp="2025-10-14 13:19:51 +0000 UTC" firstStartedPulling="2025-10-14 13:19:52.758707494 +0000 UTC m=+309.607142303" lastFinishedPulling="2025-10-14 13:19:55.48052163 +0000 UTC m=+312.328956449" observedRunningTime="2025-10-14 13:19:55.897973225 +0000 UTC m=+312.746408054" watchObservedRunningTime="2025-10-14 13:19:55.899733536 +0000 UTC m=+312.748168345" Oct 14 13:19:56 crc kubenswrapper[4725]: I1014 13:19:56.847803 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chnk6" event={"ID":"48c6e1e3-4851-4faf-bcb0-d12f5c73f6a9","Type":"ContainerStarted","Data":"f048bb3b805ac4bfc2a277076f086a09331a8589684e68afb708740c63c56b92"} Oct 14 13:19:56 crc kubenswrapper[4725]: I1014 13:19:56.849707 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rjr78" event={"ID":"08e081de-8c18-4fc2-8b9b-844352989e96","Type":"ContainerStarted","Data":"62a8940a234a50f28e5d64123f7ff25ea3319f158a115524ccb71aa69de8c7a0"} Oct 14 13:19:57 crc kubenswrapper[4725]: E1014 13:19:57.158967 4725 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48c6e1e3_4851_4faf_bcb0_d12f5c73f6a9.slice/crio-conmon-f048bb3b805ac4bfc2a277076f086a09331a8589684e68afb708740c63c56b92.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48c6e1e3_4851_4faf_bcb0_d12f5c73f6a9.slice/crio-f048bb3b805ac4bfc2a277076f086a09331a8589684e68afb708740c63c56b92.scope\": RecentStats: unable to find data in memory cache]" Oct 14 13:19:57 crc kubenswrapper[4725]: I1014 13:19:57.855912 4725 generic.go:334] "Generic (PLEG): container finished" podID="48c6e1e3-4851-4faf-bcb0-d12f5c73f6a9" containerID="f048bb3b805ac4bfc2a277076f086a09331a8589684e68afb708740c63c56b92" exitCode=0 Oct 14 13:19:57 crc kubenswrapper[4725]: I1014 13:19:57.857500 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chnk6" event={"ID":"48c6e1e3-4851-4faf-bcb0-d12f5c73f6a9","Type":"ContainerDied","Data":"f048bb3b805ac4bfc2a277076f086a09331a8589684e68afb708740c63c56b92"} Oct 14 13:19:57 crc kubenswrapper[4725]: I1014 13:19:57.871833 4725 generic.go:334] "Generic (PLEG): container finished" podID="08e081de-8c18-4fc2-8b9b-844352989e96" containerID="62a8940a234a50f28e5d64123f7ff25ea3319f158a115524ccb71aa69de8c7a0" exitCode=0 Oct 14 13:19:57 crc kubenswrapper[4725]: I1014 13:19:57.871883 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rjr78" event={"ID":"08e081de-8c18-4fc2-8b9b-844352989e96","Type":"ContainerDied","Data":"62a8940a234a50f28e5d64123f7ff25ea3319f158a115524ccb71aa69de8c7a0"} Oct 14 13:19:59 crc kubenswrapper[4725]: I1014 13:19:59.886818 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rjr78" event={"ID":"08e081de-8c18-4fc2-8b9b-844352989e96","Type":"ContainerStarted","Data":"f46d70ab2b80722dd7b68884bdb69ad67e268cedea16f634e80931256ad8e9e3"} Oct 14 13:19:59 crc kubenswrapper[4725]: I1014 13:19:59.890498 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chnk6" event={"ID":"48c6e1e3-4851-4faf-bcb0-d12f5c73f6a9","Type":"ContainerStarted","Data":"d3e13440cc171307f7e1d49b203708eef49e5372292c58596f0184dc790a8955"} Oct 14 13:19:59 crc kubenswrapper[4725]: I1014 13:19:59.914410 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rjr78" podStartSLOduration=3.296710217 podStartE2EDuration="5.914379706s" podCreationTimestamp="2025-10-14 13:19:54 +0000 UTC" firstStartedPulling="2025-10-14 13:19:55.818686947 +0000 UTC m=+312.667121756" lastFinishedPulling="2025-10-14 13:19:58.436356436 +0000 UTC m=+315.284791245" observedRunningTime="2025-10-14 13:19:59.909601317 +0000 UTC m=+316.758036146" watchObservedRunningTime="2025-10-14 13:19:59.914379706 +0000 UTC m=+316.762814515" Oct 14 13:19:59 crc kubenswrapper[4725]: I1014 13:19:59.937819 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-chnk6" podStartSLOduration=3.392522711 podStartE2EDuration="5.937786184s" podCreationTimestamp="2025-10-14 13:19:54 +0000 UTC" firstStartedPulling="2025-10-14 13:19:55.843050103 +0000 UTC m=+312.691484912" lastFinishedPulling="2025-10-14 13:19:58.388313576 +0000 UTC m=+315.236748385" observedRunningTime="2025-10-14 13:19:59.936102224 +0000 UTC m=+316.784537053" watchObservedRunningTime="2025-10-14 13:19:59.937786184 +0000 UTC m=+316.786220993" Oct 14 13:20:02 crc kubenswrapper[4725]: I1014 13:20:02.012398 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xmq8l" Oct 14 13:20:02 crc kubenswrapper[4725]: I1014 13:20:02.012559 
Oct 14 13:20:02 crc kubenswrapper[4725]: I1014 13:20:02.012559 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xmq8l"
Oct 14 13:20:02 crc kubenswrapper[4725]: I1014 13:20:02.088475 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xmq8l"
Oct 14 13:20:02 crc kubenswrapper[4725]: I1014 13:20:02.187889 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g5hgh"
Oct 14 13:20:02 crc kubenswrapper[4725]: I1014 13:20:02.187938 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g5hgh"
Oct 14 13:20:02 crc kubenswrapper[4725]: I1014 13:20:02.233855 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g5hgh"
Oct 14 13:20:02 crc kubenswrapper[4725]: I1014 13:20:02.951545 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xmq8l"
Oct 14 13:20:02 crc kubenswrapper[4725]: I1014 13:20:02.953940 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g5hgh"
Oct 14 13:20:04 crc kubenswrapper[4725]: I1014 13:20:04.475412 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rjr78"
Oct 14 13:20:04 crc kubenswrapper[4725]: I1014 13:20:04.475568 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rjr78"
Oct 14 13:20:04 crc kubenswrapper[4725]: I1014 13:20:04.524051 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rjr78"
Oct 14 13:20:04 crc kubenswrapper[4725]: I1014 13:20:04.565653 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-chnk6"
Oct 14 13:20:04 crc kubenswrapper[4725]: I1014 13:20:04.565729 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-chnk6"
Oct 14 13:20:04 crc kubenswrapper[4725]: I1014 13:20:04.614135 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-chnk6"
Oct 14 13:20:04 crc kubenswrapper[4725]: I1014 13:20:04.970207 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-chnk6"
Oct 14 13:20:04 crc kubenswrapper[4725]: I1014 13:20:04.980464 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rjr78"
Oct 14 13:20:14 crc kubenswrapper[4725]: I1014 13:20:14.546813 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" podUID="960ecad3-135d-4478-bd6f-b37588dd49bb" containerName="oauth-openshift" containerID="cri-o://7bb03f40be61bdbaaedd6343cac6b2580e63c59d74271d8ecc7c5e26d5e4e3e3" gracePeriod=15
Oct 14 13:20:14 crc kubenswrapper[4725]: I1014 13:20:14.980191 4725 generic.go:334] "Generic (PLEG): container finished" podID="960ecad3-135d-4478-bd6f-b37588dd49bb" containerID="7bb03f40be61bdbaaedd6343cac6b2580e63c59d74271d8ecc7c5e26d5e4e3e3" exitCode=0
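Each new catalog pod above logs the same probe progression: a readiness result that is still empty, one or more startup status="unhealthy" failures while the registry server warms up, then startup status="started", and only then readiness status="ready" -- readiness is not evaluated until the startup probe has passed. The last two entries begin an unrelated rollout: the old oauth-openshift pod is killed with gracePeriod=15, i.e. SIGTERM followed by up to fifteen seconds before a forced kill. A hypothetical condensed state machine of the probe gating (not kubelet's prober):

    package main

    import "fmt"

    type podProbes struct {
        started bool // latest startup-probe result
        ready   bool // latest readiness-probe result
    }

    func (p *podProbes) observe(serverUp bool) {
        if !p.started {
            if serverUp {
                p.started = true
                fmt.Println(`probe="startup" status="started"`)
            } else {
                // Failures before startupProbe.failureThreshold are expected noise.
                fmt.Println(`probe="startup" status="unhealthy"`)
            }
            return // readiness is not evaluated until startup has succeeded
        }
        if serverUp && !p.ready {
            p.ready = true
            fmt.Println(`probe="readiness" status="ready"`)
        }
    }

    func main() {
        p := &podProbes{}
        // One failed startup check while warming up, then success, then ready.
        for _, up := range []bool{false, true, true} {
            p.observe(up)
        }
    }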
pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" event={"ID":"960ecad3-135d-4478-bd6f-b37588dd49bb","Type":"ContainerDied","Data":"7bb03f40be61bdbaaedd6343cac6b2580e63c59d74271d8ecc7c5e26d5e4e3e3"} Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.447444 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.481493 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-76fc545986-6rnx8"] Oct 14 13:20:15 crc kubenswrapper[4725]: E1014 13:20:15.481797 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="960ecad3-135d-4478-bd6f-b37588dd49bb" containerName="oauth-openshift" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.481819 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="960ecad3-135d-4478-bd6f-b37588dd49bb" containerName="oauth-openshift" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.481936 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="960ecad3-135d-4478-bd6f-b37588dd49bb" containerName="oauth-openshift" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.482542 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.504971 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-76fc545986-6rnx8"] Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.575120 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-user-template-provider-selection\") pod \"960ecad3-135d-4478-bd6f-b37588dd49bb\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.575193 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-user-template-error\") pod \"960ecad3-135d-4478-bd6f-b37588dd49bb\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.575274 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-user-idp-0-file-data\") pod \"960ecad3-135d-4478-bd6f-b37588dd49bb\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.575307 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69z7n\" (UniqueName: \"kubernetes.io/projected/960ecad3-135d-4478-bd6f-b37588dd49bb-kube-api-access-69z7n\") pod \"960ecad3-135d-4478-bd6f-b37588dd49bb\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.575355 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-user-template-login\") pod \"960ecad3-135d-4478-bd6f-b37588dd49bb\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " Oct 14 13:20:15 crc 
kubenswrapper[4725]: I1014 13:20:15.575416 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/960ecad3-135d-4478-bd6f-b37588dd49bb-audit-policies\") pod \"960ecad3-135d-4478-bd6f-b37588dd49bb\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.575509 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-system-router-certs\") pod \"960ecad3-135d-4478-bd6f-b37588dd49bb\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.575552 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-system-service-ca\") pod \"960ecad3-135d-4478-bd6f-b37588dd49bb\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.575597 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-system-trusted-ca-bundle\") pod \"960ecad3-135d-4478-bd6f-b37588dd49bb\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.575638 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-system-serving-cert\") pod \"960ecad3-135d-4478-bd6f-b37588dd49bb\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.575670 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/960ecad3-135d-4478-bd6f-b37588dd49bb-audit-dir\") pod \"960ecad3-135d-4478-bd6f-b37588dd49bb\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.575713 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-system-session\") pod \"960ecad3-135d-4478-bd6f-b37588dd49bb\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.575751 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-system-cliconfig\") pod \"960ecad3-135d-4478-bd6f-b37588dd49bb\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.575831 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-system-ocp-branding-template\") pod \"960ecad3-135d-4478-bd6f-b37588dd49bb\" (UID: \"960ecad3-135d-4478-bd6f-b37588dd49bb\") " Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.576032 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c52c862-20ea-4329-98bd-c37be9392f35-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-76fc545986-6rnx8\" (UID: \"9c52c862-20ea-4329-98bd-c37be9392f35\") " pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.576026 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/960ecad3-135d-4478-bd6f-b37588dd49bb-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "960ecad3-135d-4478-bd6f-b37588dd49bb" (UID: "960ecad3-135d-4478-bd6f-b37588dd49bb"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.576084 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9c52c862-20ea-4329-98bd-c37be9392f35-v4-0-config-user-template-error\") pod \"oauth-openshift-76fc545986-6rnx8\" (UID: \"9c52c862-20ea-4329-98bd-c37be9392f35\") " pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.576674 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/960ecad3-135d-4478-bd6f-b37588dd49bb-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "960ecad3-135d-4478-bd6f-b37588dd49bb" (UID: "960ecad3-135d-4478-bd6f-b37588dd49bb"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.576884 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "960ecad3-135d-4478-bd6f-b37588dd49bb" (UID: "960ecad3-135d-4478-bd6f-b37588dd49bb"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.576905 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "960ecad3-135d-4478-bd6f-b37588dd49bb" (UID: "960ecad3-135d-4478-bd6f-b37588dd49bb"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.577154 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "960ecad3-135d-4478-bd6f-b37588dd49bb" (UID: "960ecad3-135d-4478-bd6f-b37588dd49bb"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.577243 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9c52c862-20ea-4329-98bd-c37be9392f35-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-76fc545986-6rnx8\" (UID: \"9c52c862-20ea-4329-98bd-c37be9392f35\") " pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.577336 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9c52c862-20ea-4329-98bd-c37be9392f35-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-76fc545986-6rnx8\" (UID: \"9c52c862-20ea-4329-98bd-c37be9392f35\") " pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.577440 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9c52c862-20ea-4329-98bd-c37be9392f35-v4-0-config-system-cliconfig\") pod \"oauth-openshift-76fc545986-6rnx8\" (UID: \"9c52c862-20ea-4329-98bd-c37be9392f35\") " pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.577508 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9c52c862-20ea-4329-98bd-c37be9392f35-v4-0-config-system-session\") pod \"oauth-openshift-76fc545986-6rnx8\" (UID: \"9c52c862-20ea-4329-98bd-c37be9392f35\") " pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.577546 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9c52c862-20ea-4329-98bd-c37be9392f35-v4-0-config-user-template-login\") pod \"oauth-openshift-76fc545986-6rnx8\" (UID: \"9c52c862-20ea-4329-98bd-c37be9392f35\") " pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.577691 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9c52c862-20ea-4329-98bd-c37be9392f35-audit-policies\") pod \"oauth-openshift-76fc545986-6rnx8\" (UID: \"9c52c862-20ea-4329-98bd-c37be9392f35\") " pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.577735 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9c52c862-20ea-4329-98bd-c37be9392f35-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-76fc545986-6rnx8\" (UID: \"9c52c862-20ea-4329-98bd-c37be9392f35\") " pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.577764 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/9c52c862-20ea-4329-98bd-c37be9392f35-audit-dir\") pod \"oauth-openshift-76fc545986-6rnx8\" (UID: \"9c52c862-20ea-4329-98bd-c37be9392f35\") " pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.577796 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9c52c862-20ea-4329-98bd-c37be9392f35-v4-0-config-system-router-certs\") pod \"oauth-openshift-76fc545986-6rnx8\" (UID: \"9c52c862-20ea-4329-98bd-c37be9392f35\") " pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.577839 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9c52c862-20ea-4329-98bd-c37be9392f35-v4-0-config-system-service-ca\") pod \"oauth-openshift-76fc545986-6rnx8\" (UID: \"9c52c862-20ea-4329-98bd-c37be9392f35\") " pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.577867 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wk7p\" (UniqueName: \"kubernetes.io/projected/9c52c862-20ea-4329-98bd-c37be9392f35-kube-api-access-9wk7p\") pod \"oauth-openshift-76fc545986-6rnx8\" (UID: \"9c52c862-20ea-4329-98bd-c37be9392f35\") " pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.577895 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c52c862-20ea-4329-98bd-c37be9392f35-v4-0-config-system-serving-cert\") pod \"oauth-openshift-76fc545986-6rnx8\" (UID: \"9c52c862-20ea-4329-98bd-c37be9392f35\") " pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.577955 4725 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/960ecad3-135d-4478-bd6f-b37588dd49bb-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.577971 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.577986 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.578074 4725 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/960ecad3-135d-4478-bd6f-b37588dd49bb-audit-dir\") on node \"crc\" DevicePath \"\"" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.578145 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 
13:20:15.583470 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "960ecad3-135d-4478-bd6f-b37588dd49bb" (UID: "960ecad3-135d-4478-bd6f-b37588dd49bb"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.583808 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "960ecad3-135d-4478-bd6f-b37588dd49bb" (UID: "960ecad3-135d-4478-bd6f-b37588dd49bb"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.583818 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "960ecad3-135d-4478-bd6f-b37588dd49bb" (UID: "960ecad3-135d-4478-bd6f-b37588dd49bb"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.583963 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/960ecad3-135d-4478-bd6f-b37588dd49bb-kube-api-access-69z7n" (OuterVolumeSpecName: "kube-api-access-69z7n") pod "960ecad3-135d-4478-bd6f-b37588dd49bb" (UID: "960ecad3-135d-4478-bd6f-b37588dd49bb"). InnerVolumeSpecName "kube-api-access-69z7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.584250 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "960ecad3-135d-4478-bd6f-b37588dd49bb" (UID: "960ecad3-135d-4478-bd6f-b37588dd49bb"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.584702 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "960ecad3-135d-4478-bd6f-b37588dd49bb" (UID: "960ecad3-135d-4478-bd6f-b37588dd49bb"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.584823 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "960ecad3-135d-4478-bd6f-b37588dd49bb" (UID: "960ecad3-135d-4478-bd6f-b37588dd49bb"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.585221 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "960ecad3-135d-4478-bd6f-b37588dd49bb" (UID: "960ecad3-135d-4478-bd6f-b37588dd49bb"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.585310 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "960ecad3-135d-4478-bd6f-b37588dd49bb" (UID: "960ecad3-135d-4478-bd6f-b37588dd49bb"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.679965 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9c52c862-20ea-4329-98bd-c37be9392f35-v4-0-config-system-session\") pod \"oauth-openshift-76fc545986-6rnx8\" (UID: \"9c52c862-20ea-4329-98bd-c37be9392f35\") " pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.680051 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9c52c862-20ea-4329-98bd-c37be9392f35-v4-0-config-user-template-login\") pod \"oauth-openshift-76fc545986-6rnx8\" (UID: \"9c52c862-20ea-4329-98bd-c37be9392f35\") " pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.680099 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9c52c862-20ea-4329-98bd-c37be9392f35-audit-policies\") pod \"oauth-openshift-76fc545986-6rnx8\" (UID: \"9c52c862-20ea-4329-98bd-c37be9392f35\") " pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.680127 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9c52c862-20ea-4329-98bd-c37be9392f35-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-76fc545986-6rnx8\" (UID: \"9c52c862-20ea-4329-98bd-c37be9392f35\") " pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.680146 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9c52c862-20ea-4329-98bd-c37be9392f35-audit-dir\") pod \"oauth-openshift-76fc545986-6rnx8\" (UID: \"9c52c862-20ea-4329-98bd-c37be9392f35\") " pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.680171 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9c52c862-20ea-4329-98bd-c37be9392f35-v4-0-config-system-router-certs\") pod \"oauth-openshift-76fc545986-6rnx8\" (UID: 
\"9c52c862-20ea-4329-98bd-c37be9392f35\") " pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.680196 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9c52c862-20ea-4329-98bd-c37be9392f35-v4-0-config-system-service-ca\") pod \"oauth-openshift-76fc545986-6rnx8\" (UID: \"9c52c862-20ea-4329-98bd-c37be9392f35\") " pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.680218 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wk7p\" (UniqueName: \"kubernetes.io/projected/9c52c862-20ea-4329-98bd-c37be9392f35-kube-api-access-9wk7p\") pod \"oauth-openshift-76fc545986-6rnx8\" (UID: \"9c52c862-20ea-4329-98bd-c37be9392f35\") " pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.680234 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c52c862-20ea-4329-98bd-c37be9392f35-v4-0-config-system-serving-cert\") pod \"oauth-openshift-76fc545986-6rnx8\" (UID: \"9c52c862-20ea-4329-98bd-c37be9392f35\") " pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.680268 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c52c862-20ea-4329-98bd-c37be9392f35-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-76fc545986-6rnx8\" (UID: \"9c52c862-20ea-4329-98bd-c37be9392f35\") " pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.680292 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9c52c862-20ea-4329-98bd-c37be9392f35-v4-0-config-user-template-error\") pod \"oauth-openshift-76fc545986-6rnx8\" (UID: \"9c52c862-20ea-4329-98bd-c37be9392f35\") " pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.680295 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9c52c862-20ea-4329-98bd-c37be9392f35-audit-dir\") pod \"oauth-openshift-76fc545986-6rnx8\" (UID: \"9c52c862-20ea-4329-98bd-c37be9392f35\") " pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.680320 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9c52c862-20ea-4329-98bd-c37be9392f35-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-76fc545986-6rnx8\" (UID: \"9c52c862-20ea-4329-98bd-c37be9392f35\") " pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.680338 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9c52c862-20ea-4329-98bd-c37be9392f35-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-76fc545986-6rnx8\" (UID: 
\"9c52c862-20ea-4329-98bd-c37be9392f35\") " pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.680360 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9c52c862-20ea-4329-98bd-c37be9392f35-v4-0-config-system-cliconfig\") pod \"oauth-openshift-76fc545986-6rnx8\" (UID: \"9c52c862-20ea-4329-98bd-c37be9392f35\") " pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.680405 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.680417 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.680430 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.680442 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.680478 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69z7n\" (UniqueName: \"kubernetes.io/projected/960ecad3-135d-4478-bd6f-b37588dd49bb-kube-api-access-69z7n\") on node \"crc\" DevicePath \"\"" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.680490 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.680504 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.680519 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.680530 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/960ecad3-135d-4478-bd6f-b37588dd49bb-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.681613 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9c52c862-20ea-4329-98bd-c37be9392f35-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-76fc545986-6rnx8\" (UID: \"9c52c862-20ea-4329-98bd-c37be9392f35\") " pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.682737 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9c52c862-20ea-4329-98bd-c37be9392f35-audit-policies\") pod \"oauth-openshift-76fc545986-6rnx8\" (UID: \"9c52c862-20ea-4329-98bd-c37be9392f35\") " pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.682753 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c52c862-20ea-4329-98bd-c37be9392f35-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-76fc545986-6rnx8\" (UID: \"9c52c862-20ea-4329-98bd-c37be9392f35\") " pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.682941 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9c52c862-20ea-4329-98bd-c37be9392f35-v4-0-config-system-service-ca\") pod \"oauth-openshift-76fc545986-6rnx8\" (UID: \"9c52c862-20ea-4329-98bd-c37be9392f35\") " pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.684503 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9c52c862-20ea-4329-98bd-c37be9392f35-v4-0-config-system-session\") pod \"oauth-openshift-76fc545986-6rnx8\" (UID: \"9c52c862-20ea-4329-98bd-c37be9392f35\") " pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.685067 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9c52c862-20ea-4329-98bd-c37be9392f35-v4-0-config-system-router-certs\") pod \"oauth-openshift-76fc545986-6rnx8\" (UID: \"9c52c862-20ea-4329-98bd-c37be9392f35\") " pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.687008 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9c52c862-20ea-4329-98bd-c37be9392f35-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-76fc545986-6rnx8\" (UID: \"9c52c862-20ea-4329-98bd-c37be9392f35\") " pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.687084 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9c52c862-20ea-4329-98bd-c37be9392f35-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-76fc545986-6rnx8\" (UID: \"9c52c862-20ea-4329-98bd-c37be9392f35\") " pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.687551 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9c52c862-20ea-4329-98bd-c37be9392f35-v4-0-config-user-template-login\") pod \"oauth-openshift-76fc545986-6rnx8\" (UID: 
\"9c52c862-20ea-4329-98bd-c37be9392f35\") " pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.687895 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c52c862-20ea-4329-98bd-c37be9392f35-v4-0-config-system-serving-cert\") pod \"oauth-openshift-76fc545986-6rnx8\" (UID: \"9c52c862-20ea-4329-98bd-c37be9392f35\") " pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.691156 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9c52c862-20ea-4329-98bd-c37be9392f35-v4-0-config-user-template-error\") pod \"oauth-openshift-76fc545986-6rnx8\" (UID: \"9c52c862-20ea-4329-98bd-c37be9392f35\") " pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.697579 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9c52c862-20ea-4329-98bd-c37be9392f35-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-76fc545986-6rnx8\" (UID: \"9c52c862-20ea-4329-98bd-c37be9392f35\") " pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.699849 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wk7p\" (UniqueName: \"kubernetes.io/projected/9c52c862-20ea-4329-98bd-c37be9392f35-kube-api-access-9wk7p\") pod \"oauth-openshift-76fc545986-6rnx8\" (UID: \"9c52c862-20ea-4329-98bd-c37be9392f35\") " pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.802519 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.989262 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" event={"ID":"960ecad3-135d-4478-bd6f-b37588dd49bb","Type":"ContainerDied","Data":"150a783bf5b4a045fbe996138fd897ae7152ad047a5d6ce55eb51a8329c262c4"} Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.989750 4725 scope.go:117] "RemoveContainer" containerID="7bb03f40be61bdbaaedd6343cac6b2580e63c59d74271d8ecc7c5e26d5e4e3e3" Oct 14 13:20:15 crc kubenswrapper[4725]: I1014 13:20:15.989335 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vq7zq" Oct 14 13:20:16 crc kubenswrapper[4725]: I1014 13:20:16.022663 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vq7zq"] Oct 14 13:20:16 crc kubenswrapper[4725]: I1014 13:20:16.025032 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vq7zq"] Oct 14 13:20:16 crc kubenswrapper[4725]: I1014 13:20:16.072809 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-76fc545986-6rnx8"] Oct 14 13:20:16 crc kubenswrapper[4725]: W1014 13:20:16.082203 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c52c862_20ea_4329_98bd_c37be9392f35.slice/crio-c878a48e85ee7534668a919ea6f7e303b0d1b86e008196bb36fb95332c8f87f0 WatchSource:0}: Error finding container c878a48e85ee7534668a919ea6f7e303b0d1b86e008196bb36fb95332c8f87f0: Status 404 returned error can't find the container with id c878a48e85ee7534668a919ea6f7e303b0d1b86e008196bb36fb95332c8f87f0 Oct 14 13:20:16 crc kubenswrapper[4725]: I1014 13:20:16.997726 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" event={"ID":"9c52c862-20ea-4329-98bd-c37be9392f35","Type":"ContainerStarted","Data":"e12058c597a56dbc645b278c7b7f960cf38e8c60d90ce1f75b65674079e9403f"} Oct 14 13:20:16 crc kubenswrapper[4725]: I1014 13:20:16.998245 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" event={"ID":"9c52c862-20ea-4329-98bd-c37be9392f35","Type":"ContainerStarted","Data":"c878a48e85ee7534668a919ea6f7e303b0d1b86e008196bb36fb95332c8f87f0"} Oct 14 13:20:16 crc kubenswrapper[4725]: I1014 13:20:16.998267 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" Oct 14 13:20:17 crc kubenswrapper[4725]: I1014 13:20:17.003960 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" Oct 14 13:20:17 crc kubenswrapper[4725]: I1014 13:20:17.037950 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-76fc545986-6rnx8" podStartSLOduration=28.037919869 podStartE2EDuration="28.037919869s" podCreationTimestamp="2025-10-14 13:19:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:20:17.029148544 +0000 UTC m=+333.877583373" watchObservedRunningTime="2025-10-14 13:20:17.037919869 +0000 UTC m=+333.886354678" Oct 14 13:20:17 crc kubenswrapper[4725]: I1014 13:20:17.929717 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="960ecad3-135d-4478-bd6f-b37588dd49bb" path="/var/lib/kubelet/pods/960ecad3-135d-4478-bd6f-b37588dd49bb/volumes" Oct 14 13:21:02 crc kubenswrapper[4725]: I1014 13:21:02.521106 4725 patch_prober.go:28] interesting pod/machine-config-daemon-t9hh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 13:21:02 crc kubenswrapper[4725]: I1014 13:21:02.521793 4725 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 13:21:32 crc kubenswrapper[4725]: I1014 13:21:32.521224 4725 patch_prober.go:28] interesting pod/machine-config-daemon-t9hh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 13:21:32 crc kubenswrapper[4725]: I1014 13:21:32.521835 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 13:22:02 crc kubenswrapper[4725]: I1014 13:22:02.520790 4725 patch_prober.go:28] interesting pod/machine-config-daemon-t9hh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 13:22:02 crc kubenswrapper[4725]: I1014 13:22:02.521284 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 13:22:02 crc kubenswrapper[4725]: I1014 13:22:02.521331 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" Oct 14 13:22:02 crc kubenswrapper[4725]: I1014 13:22:02.521852 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b08205f9a4a675b919bf3d7c8faa16184d5d3774c47efd0ec1554cceaf138624"} pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 13:22:02 crc kubenswrapper[4725]: I1014 13:22:02.521913 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" containerID="cri-o://b08205f9a4a675b919bf3d7c8faa16184d5d3774c47efd0ec1554cceaf138624" gracePeriod=600 Oct 14 13:22:03 crc kubenswrapper[4725]: I1014 13:22:03.644032 4725 generic.go:334] "Generic (PLEG): container finished" podID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerID="b08205f9a4a675b919bf3d7c8faa16184d5d3774c47efd0ec1554cceaf138624" exitCode=0 Oct 14 13:22:03 crc kubenswrapper[4725]: I1014 13:22:03.644201 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" event={"ID":"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c","Type":"ContainerDied","Data":"b08205f9a4a675b919bf3d7c8faa16184d5d3774c47efd0ec1554cceaf138624"} Oct 14 13:22:03 crc kubenswrapper[4725]: I1014 13:22:03.645016 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" event={"ID":"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c","Type":"ContainerStarted","Data":"901725d1f013ca80344ace1de79ac7ae086e040091353bbd5f5c2d54a02abb43"} Oct 14 13:22:03 crc kubenswrapper[4725]: I1014 13:22:03.645051 4725 scope.go:117] "RemoveContainer" containerID="6cc50d2bb6d8b894c9524b2557eda22f857a08c286a90a95ab4be265b524020c" Oct 14 13:22:37 crc kubenswrapper[4725]: I1014 13:22:37.411650 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-glls8"] Oct 14 13:22:37 crc kubenswrapper[4725]: I1014 13:22:37.412938 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-glls8" Oct 14 13:22:37 crc kubenswrapper[4725]: I1014 13:22:37.423917 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-glls8"] Oct 14 13:22:37 crc kubenswrapper[4725]: I1014 13:22:37.583330 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/191a7362-a138-4c9b-a2e8-341492ebd57d-trusted-ca\") pod \"image-registry-66df7c8f76-glls8\" (UID: \"191a7362-a138-4c9b-a2e8-341492ebd57d\") " pod="openshift-image-registry/image-registry-66df7c8f76-glls8" Oct 14 13:22:37 crc kubenswrapper[4725]: I1014 13:22:37.583395 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qt2j\" (UniqueName: \"kubernetes.io/projected/191a7362-a138-4c9b-a2e8-341492ebd57d-kube-api-access-7qt2j\") pod \"image-registry-66df7c8f76-glls8\" (UID: \"191a7362-a138-4c9b-a2e8-341492ebd57d\") " pod="openshift-image-registry/image-registry-66df7c8f76-glls8" Oct 14 13:22:37 crc kubenswrapper[4725]: I1014 13:22:37.583623 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/191a7362-a138-4c9b-a2e8-341492ebd57d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-glls8\" (UID: \"191a7362-a138-4c9b-a2e8-341492ebd57d\") " pod="openshift-image-registry/image-registry-66df7c8f76-glls8" Oct 14 13:22:37 crc kubenswrapper[4725]: I1014 13:22:37.583756 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-glls8\" (UID: \"191a7362-a138-4c9b-a2e8-341492ebd57d\") " pod="openshift-image-registry/image-registry-66df7c8f76-glls8" Oct 14 13:22:37 crc kubenswrapper[4725]: I1014 13:22:37.583834 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/191a7362-a138-4c9b-a2e8-341492ebd57d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-glls8\" (UID: \"191a7362-a138-4c9b-a2e8-341492ebd57d\") " pod="openshift-image-registry/image-registry-66df7c8f76-glls8" Oct 14 13:22:37 crc kubenswrapper[4725]: I1014 13:22:37.583872 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/191a7362-a138-4c9b-a2e8-341492ebd57d-registry-certificates\") pod \"image-registry-66df7c8f76-glls8\" (UID: \"191a7362-a138-4c9b-a2e8-341492ebd57d\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-glls8" Oct 14 13:22:37 crc kubenswrapper[4725]: I1014 13:22:37.584010 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/191a7362-a138-4c9b-a2e8-341492ebd57d-bound-sa-token\") pod \"image-registry-66df7c8f76-glls8\" (UID: \"191a7362-a138-4c9b-a2e8-341492ebd57d\") " pod="openshift-image-registry/image-registry-66df7c8f76-glls8" Oct 14 13:22:37 crc kubenswrapper[4725]: I1014 13:22:37.584058 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/191a7362-a138-4c9b-a2e8-341492ebd57d-registry-tls\") pod \"image-registry-66df7c8f76-glls8\" (UID: \"191a7362-a138-4c9b-a2e8-341492ebd57d\") " pod="openshift-image-registry/image-registry-66df7c8f76-glls8" Oct 14 13:22:37 crc kubenswrapper[4725]: I1014 13:22:37.606070 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-glls8\" (UID: \"191a7362-a138-4c9b-a2e8-341492ebd57d\") " pod="openshift-image-registry/image-registry-66df7c8f76-glls8" Oct 14 13:22:37 crc kubenswrapper[4725]: I1014 13:22:37.685643 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/191a7362-a138-4c9b-a2e8-341492ebd57d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-glls8\" (UID: \"191a7362-a138-4c9b-a2e8-341492ebd57d\") " pod="openshift-image-registry/image-registry-66df7c8f76-glls8" Oct 14 13:22:37 crc kubenswrapper[4725]: I1014 13:22:37.685726 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/191a7362-a138-4c9b-a2e8-341492ebd57d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-glls8\" (UID: \"191a7362-a138-4c9b-a2e8-341492ebd57d\") " pod="openshift-image-registry/image-registry-66df7c8f76-glls8" Oct 14 13:22:37 crc kubenswrapper[4725]: I1014 13:22:37.685755 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/191a7362-a138-4c9b-a2e8-341492ebd57d-registry-certificates\") pod \"image-registry-66df7c8f76-glls8\" (UID: \"191a7362-a138-4c9b-a2e8-341492ebd57d\") " pod="openshift-image-registry/image-registry-66df7c8f76-glls8" Oct 14 13:22:37 crc kubenswrapper[4725]: I1014 13:22:37.685798 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/191a7362-a138-4c9b-a2e8-341492ebd57d-bound-sa-token\") pod \"image-registry-66df7c8f76-glls8\" (UID: \"191a7362-a138-4c9b-a2e8-341492ebd57d\") " pod="openshift-image-registry/image-registry-66df7c8f76-glls8" Oct 14 13:22:37 crc kubenswrapper[4725]: I1014 13:22:37.685822 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/191a7362-a138-4c9b-a2e8-341492ebd57d-registry-tls\") pod \"image-registry-66df7c8f76-glls8\" (UID: \"191a7362-a138-4c9b-a2e8-341492ebd57d\") " pod="openshift-image-registry/image-registry-66df7c8f76-glls8" Oct 14 13:22:37 crc kubenswrapper[4725]: I1014 13:22:37.686273 4725 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/191a7362-a138-4c9b-a2e8-341492ebd57d-trusted-ca\") pod \"image-registry-66df7c8f76-glls8\" (UID: \"191a7362-a138-4c9b-a2e8-341492ebd57d\") " pod="openshift-image-registry/image-registry-66df7c8f76-glls8" Oct 14 13:22:37 crc kubenswrapper[4725]: I1014 13:22:37.686317 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qt2j\" (UniqueName: \"kubernetes.io/projected/191a7362-a138-4c9b-a2e8-341492ebd57d-kube-api-access-7qt2j\") pod \"image-registry-66df7c8f76-glls8\" (UID: \"191a7362-a138-4c9b-a2e8-341492ebd57d\") " pod="openshift-image-registry/image-registry-66df7c8f76-glls8" Oct 14 13:22:37 crc kubenswrapper[4725]: I1014 13:22:37.686381 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/191a7362-a138-4c9b-a2e8-341492ebd57d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-glls8\" (UID: \"191a7362-a138-4c9b-a2e8-341492ebd57d\") " pod="openshift-image-registry/image-registry-66df7c8f76-glls8" Oct 14 13:22:37 crc kubenswrapper[4725]: I1014 13:22:37.687275 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/191a7362-a138-4c9b-a2e8-341492ebd57d-registry-certificates\") pod \"image-registry-66df7c8f76-glls8\" (UID: \"191a7362-a138-4c9b-a2e8-341492ebd57d\") " pod="openshift-image-registry/image-registry-66df7c8f76-glls8" Oct 14 13:22:37 crc kubenswrapper[4725]: I1014 13:22:37.689378 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/191a7362-a138-4c9b-a2e8-341492ebd57d-trusted-ca\") pod \"image-registry-66df7c8f76-glls8\" (UID: \"191a7362-a138-4c9b-a2e8-341492ebd57d\") " pod="openshift-image-registry/image-registry-66df7c8f76-glls8" Oct 14 13:22:37 crc kubenswrapper[4725]: I1014 13:22:37.700688 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/191a7362-a138-4c9b-a2e8-341492ebd57d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-glls8\" (UID: \"191a7362-a138-4c9b-a2e8-341492ebd57d\") " pod="openshift-image-registry/image-registry-66df7c8f76-glls8" Oct 14 13:22:37 crc kubenswrapper[4725]: I1014 13:22:37.701049 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/191a7362-a138-4c9b-a2e8-341492ebd57d-registry-tls\") pod \"image-registry-66df7c8f76-glls8\" (UID: \"191a7362-a138-4c9b-a2e8-341492ebd57d\") " pod="openshift-image-registry/image-registry-66df7c8f76-glls8" Oct 14 13:22:37 crc kubenswrapper[4725]: I1014 13:22:37.703263 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/191a7362-a138-4c9b-a2e8-341492ebd57d-bound-sa-token\") pod \"image-registry-66df7c8f76-glls8\" (UID: \"191a7362-a138-4c9b-a2e8-341492ebd57d\") " pod="openshift-image-registry/image-registry-66df7c8f76-glls8" Oct 14 13:22:37 crc kubenswrapper[4725]: I1014 13:22:37.704593 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qt2j\" (UniqueName: \"kubernetes.io/projected/191a7362-a138-4c9b-a2e8-341492ebd57d-kube-api-access-7qt2j\") pod \"image-registry-66df7c8f76-glls8\" (UID: \"191a7362-a138-4c9b-a2e8-341492ebd57d\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-glls8" Oct 14 13:22:37 crc kubenswrapper[4725]: I1014 13:22:37.728546 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-glls8" Oct 14 13:22:38 crc kubenswrapper[4725]: I1014 13:22:38.129562 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-glls8"] Oct 14 13:22:38 crc kubenswrapper[4725]: I1014 13:22:38.858760 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-glls8" event={"ID":"191a7362-a138-4c9b-a2e8-341492ebd57d","Type":"ContainerStarted","Data":"f65eff5d1490fe5aba9fe6d735a21317fd6e0d250879e54034c0607136dce8db"} Oct 14 13:22:38 crc kubenswrapper[4725]: I1014 13:22:38.859144 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-glls8" event={"ID":"191a7362-a138-4c9b-a2e8-341492ebd57d","Type":"ContainerStarted","Data":"98b49e45026446f6901d7a12ccc3853a9bbfad25d0aaab66d61bd7dbf3a259fb"} Oct 14 13:22:38 crc kubenswrapper[4725]: I1014 13:22:38.859180 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-glls8" Oct 14 13:22:38 crc kubenswrapper[4725]: I1014 13:22:38.884967 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-glls8" podStartSLOduration=1.884948037 podStartE2EDuration="1.884948037s" podCreationTimestamp="2025-10-14 13:22:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:22:38.883058074 +0000 UTC m=+475.731492903" watchObservedRunningTime="2025-10-14 13:22:38.884948037 +0000 UTC m=+475.733382846" Oct 14 13:22:57 crc kubenswrapper[4725]: I1014 13:22:57.735029 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-glls8" Oct 14 13:22:57 crc kubenswrapper[4725]: I1014 13:22:57.802134 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-96hd4"] Oct 14 13:23:22 crc kubenswrapper[4725]: I1014 13:23:22.846995 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" podUID="c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3" containerName="registry" containerID="cri-o://4bcba93452e78d8e294d0f0d77ab15a6fbce67060187ef59b8b4c92498e1a2fa" gracePeriod=30 Oct 14 13:23:23 crc kubenswrapper[4725]: I1014 13:23:23.137210 4725 generic.go:334] "Generic (PLEG): container finished" podID="c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3" containerID="4bcba93452e78d8e294d0f0d77ab15a6fbce67060187ef59b8b4c92498e1a2fa" exitCode=0 Oct 14 13:23:23 crc kubenswrapper[4725]: I1014 13:23:23.137419 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" event={"ID":"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3","Type":"ContainerDied","Data":"4bcba93452e78d8e294d0f0d77ab15a6fbce67060187ef59b8b4c92498e1a2fa"} Oct 14 13:23:23 crc kubenswrapper[4725]: I1014 13:23:23.221413 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:23:23 crc kubenswrapper[4725]: I1014 13:23:23.340140 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jm47q\" (UniqueName: \"kubernetes.io/projected/c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3-kube-api-access-jm47q\") pod \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " Oct 14 13:23:23 crc kubenswrapper[4725]: I1014 13:23:23.340409 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " Oct 14 13:23:23 crc kubenswrapper[4725]: I1014 13:23:23.340542 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3-installation-pull-secrets\") pod \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " Oct 14 13:23:23 crc kubenswrapper[4725]: I1014 13:23:23.340589 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3-trusted-ca\") pod \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " Oct 14 13:23:23 crc kubenswrapper[4725]: I1014 13:23:23.340643 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3-registry-certificates\") pod \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " Oct 14 13:23:23 crc kubenswrapper[4725]: I1014 13:23:23.340715 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3-ca-trust-extracted\") pod \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " Oct 14 13:23:23 crc kubenswrapper[4725]: I1014 13:23:23.340766 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3-registry-tls\") pod \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " Oct 14 13:23:23 crc kubenswrapper[4725]: I1014 13:23:23.340814 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3-bound-sa-token\") pod \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\" (UID: \"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3\") " Oct 14 13:23:23 crc kubenswrapper[4725]: I1014 13:23:23.342108 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:23:23 crc kubenswrapper[4725]: I1014 13:23:23.342178 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:23:23 crc kubenswrapper[4725]: I1014 13:23:23.349221 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:23:23 crc kubenswrapper[4725]: I1014 13:23:23.349480 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:23:23 crc kubenswrapper[4725]: I1014 13:23:23.349562 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3-kube-api-access-jm47q" (OuterVolumeSpecName: "kube-api-access-jm47q") pod "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3"). InnerVolumeSpecName "kube-api-access-jm47q". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:23:23 crc kubenswrapper[4725]: I1014 13:23:23.349968 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:23:23 crc kubenswrapper[4725]: I1014 13:23:23.355891 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 14 13:23:23 crc kubenswrapper[4725]: I1014 13:23:23.372742 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3" (UID: "c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:23:23 crc kubenswrapper[4725]: I1014 13:23:23.442985 4725 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3-registry-certificates\") on node \"crc\" DevicePath \"\"" Oct 14 13:23:23 crc kubenswrapper[4725]: I1014 13:23:23.443058 4725 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Oct 14 13:23:23 crc kubenswrapper[4725]: I1014 13:23:23.443085 4725 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3-registry-tls\") on node \"crc\" DevicePath \"\"" Oct 14 13:23:23 crc kubenswrapper[4725]: I1014 13:23:23.443113 4725 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 14 13:23:23 crc kubenswrapper[4725]: I1014 13:23:23.443138 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jm47q\" (UniqueName: \"kubernetes.io/projected/c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3-kube-api-access-jm47q\") on node \"crc\" DevicePath \"\"" Oct 14 13:23:23 crc kubenswrapper[4725]: I1014 13:23:23.443162 4725 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Oct 14 13:23:23 crc kubenswrapper[4725]: I1014 13:23:23.443189 4725 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 14 13:23:24 crc kubenswrapper[4725]: I1014 13:23:24.144539 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" event={"ID":"c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3","Type":"ContainerDied","Data":"310f5a209ae1d5ef1379ffa93113145d5368934df740c6190be135120d919648"} Oct 14 13:23:24 crc kubenswrapper[4725]: I1014 13:23:24.144600 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-96hd4" Oct 14 13:23:24 crc kubenswrapper[4725]: I1014 13:23:24.144833 4725 scope.go:117] "RemoveContainer" containerID="4bcba93452e78d8e294d0f0d77ab15a6fbce67060187ef59b8b4c92498e1a2fa" Oct 14 13:23:24 crc kubenswrapper[4725]: I1014 13:23:24.166505 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-96hd4"] Oct 14 13:23:24 crc kubenswrapper[4725]: I1014 13:23:24.172277 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-96hd4"] Oct 14 13:23:25 crc kubenswrapper[4725]: I1014 13:23:25.930186 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3" path="/var/lib/kubelet/pods/c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3/volumes" Oct 14 13:24:02 crc kubenswrapper[4725]: I1014 13:24:02.520421 4725 patch_prober.go:28] interesting pod/machine-config-daemon-t9hh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 13:24:02 crc kubenswrapper[4725]: I1014 13:24:02.521009 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 13:24:32 crc kubenswrapper[4725]: I1014 13:24:32.521140 4725 patch_prober.go:28] interesting pod/machine-config-daemon-t9hh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 13:24:32 crc kubenswrapper[4725]: I1014 13:24:32.521849 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 13:25:02 crc kubenswrapper[4725]: I1014 13:25:02.521115 4725 patch_prober.go:28] interesting pod/machine-config-daemon-t9hh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 13:25:02 crc kubenswrapper[4725]: I1014 13:25:02.521893 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 13:25:02 crc kubenswrapper[4725]: I1014 13:25:02.521961 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" Oct 14 13:25:02 crc kubenswrapper[4725]: I1014 13:25:02.522621 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"901725d1f013ca80344ace1de79ac7ae086e040091353bbd5f5c2d54a02abb43"} pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 13:25:02 crc kubenswrapper[4725]: I1014 13:25:02.522687 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" containerID="cri-o://901725d1f013ca80344ace1de79ac7ae086e040091353bbd5f5c2d54a02abb43" gracePeriod=600 Oct 14 13:25:02 crc kubenswrapper[4725]: I1014 13:25:02.769090 4725 generic.go:334] "Generic (PLEG): container finished" podID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerID="901725d1f013ca80344ace1de79ac7ae086e040091353bbd5f5c2d54a02abb43" exitCode=0 Oct 14 13:25:02 crc kubenswrapper[4725]: I1014 13:25:02.769149 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" event={"ID":"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c","Type":"ContainerDied","Data":"901725d1f013ca80344ace1de79ac7ae086e040091353bbd5f5c2d54a02abb43"} Oct 14 13:25:02 crc kubenswrapper[4725]: I1014 13:25:02.769208 4725 scope.go:117] "RemoveContainer" containerID="b08205f9a4a675b919bf3d7c8faa16184d5d3774c47efd0ec1554cceaf138624" Oct 14 13:25:03 crc kubenswrapper[4725]: I1014 13:25:03.780892 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" event={"ID":"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c","Type":"ContainerStarted","Data":"8af3cd49f53b953cdb857f98f2a4e4ef4b83977a9a2d06c5f02fcc6cd95add47"} Oct 14 13:25:41 crc kubenswrapper[4725]: I1014 13:25:41.723017 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-kpptr"] Oct 14 13:25:41 crc kubenswrapper[4725]: E1014 13:25:41.723804 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3" containerName="registry" Oct 14 13:25:41 crc kubenswrapper[4725]: I1014 13:25:41.723823 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3" containerName="registry" Oct 14 13:25:41 crc kubenswrapper[4725]: I1014 13:25:41.723947 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7ec7d62-80ef-4c08-9927-a66bc3ee6cb3" containerName="registry" Oct 14 13:25:41 crc kubenswrapper[4725]: I1014 13:25:41.727025 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-s9g2p"] Oct 14 13:25:41 crc kubenswrapper[4725]: I1014 13:25:41.727171 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-kpptr" Oct 14 13:25:41 crc kubenswrapper[4725]: I1014 13:25:41.727782 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-s9g2p" Oct 14 13:25:41 crc kubenswrapper[4725]: I1014 13:25:41.731048 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Oct 14 13:25:41 crc kubenswrapper[4725]: I1014 13:25:41.733057 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Oct 14 13:25:41 crc kubenswrapper[4725]: I1014 13:25:41.735140 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-kpptr"] Oct 14 13:25:41 crc kubenswrapper[4725]: I1014 13:25:41.736170 4725 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-8npk2" Oct 14 13:25:41 crc kubenswrapper[4725]: I1014 13:25:41.738419 4725 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-2gzlg" Oct 14 13:25:41 crc kubenswrapper[4725]: I1014 13:25:41.749774 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-jjlbv"] Oct 14 13:25:41 crc kubenswrapper[4725]: I1014 13:25:41.750421 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-jjlbv" Oct 14 13:25:41 crc kubenswrapper[4725]: I1014 13:25:41.754400 4725 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-lcf9h" Oct 14 13:25:41 crc kubenswrapper[4725]: I1014 13:25:41.765563 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-s9g2p"] Oct 14 13:25:41 crc kubenswrapper[4725]: I1014 13:25:41.779627 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-jjlbv"] Oct 14 13:25:41 crc kubenswrapper[4725]: I1014 13:25:41.845061 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgxtg\" (UniqueName: \"kubernetes.io/projected/f6145f04-9a33-4d9b-9158-7f6fd9bf38d3-kube-api-access-hgxtg\") pod \"cert-manager-cainjector-7f985d654d-s9g2p\" (UID: \"f6145f04-9a33-4d9b-9158-7f6fd9bf38d3\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-s9g2p" Oct 14 13:25:41 crc kubenswrapper[4725]: I1014 13:25:41.845152 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98cg4\" (UniqueName: \"kubernetes.io/projected/39534dc6-c413-407f-a4d4-1d129d0dcdf3-kube-api-access-98cg4\") pod \"cert-manager-5b446d88c5-kpptr\" (UID: \"39534dc6-c413-407f-a4d4-1d129d0dcdf3\") " pod="cert-manager/cert-manager-5b446d88c5-kpptr" Oct 14 13:25:41 crc kubenswrapper[4725]: I1014 13:25:41.845176 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjblr\" (UniqueName: \"kubernetes.io/projected/65c2f894-94a7-4e13-b4f9-16bc3a11921b-kube-api-access-mjblr\") pod \"cert-manager-webhook-5655c58dd6-jjlbv\" (UID: \"65c2f894-94a7-4e13-b4f9-16bc3a11921b\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-jjlbv" Oct 14 13:25:41 crc kubenswrapper[4725]: I1014 13:25:41.946777 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjblr\" (UniqueName: \"kubernetes.io/projected/65c2f894-94a7-4e13-b4f9-16bc3a11921b-kube-api-access-mjblr\") pod \"cert-manager-webhook-5655c58dd6-jjlbv\" (UID: \"65c2f894-94a7-4e13-b4f9-16bc3a11921b\") " 
pod="cert-manager/cert-manager-webhook-5655c58dd6-jjlbv" Oct 14 13:25:41 crc kubenswrapper[4725]: I1014 13:25:41.946849 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgxtg\" (UniqueName: \"kubernetes.io/projected/f6145f04-9a33-4d9b-9158-7f6fd9bf38d3-kube-api-access-hgxtg\") pod \"cert-manager-cainjector-7f985d654d-s9g2p\" (UID: \"f6145f04-9a33-4d9b-9158-7f6fd9bf38d3\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-s9g2p" Oct 14 13:25:41 crc kubenswrapper[4725]: I1014 13:25:41.946932 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98cg4\" (UniqueName: \"kubernetes.io/projected/39534dc6-c413-407f-a4d4-1d129d0dcdf3-kube-api-access-98cg4\") pod \"cert-manager-5b446d88c5-kpptr\" (UID: \"39534dc6-c413-407f-a4d4-1d129d0dcdf3\") " pod="cert-manager/cert-manager-5b446d88c5-kpptr" Oct 14 13:25:41 crc kubenswrapper[4725]: I1014 13:25:41.965655 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98cg4\" (UniqueName: \"kubernetes.io/projected/39534dc6-c413-407f-a4d4-1d129d0dcdf3-kube-api-access-98cg4\") pod \"cert-manager-5b446d88c5-kpptr\" (UID: \"39534dc6-c413-407f-a4d4-1d129d0dcdf3\") " pod="cert-manager/cert-manager-5b446d88c5-kpptr" Oct 14 13:25:41 crc kubenswrapper[4725]: I1014 13:25:41.965699 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgxtg\" (UniqueName: \"kubernetes.io/projected/f6145f04-9a33-4d9b-9158-7f6fd9bf38d3-kube-api-access-hgxtg\") pod \"cert-manager-cainjector-7f985d654d-s9g2p\" (UID: \"f6145f04-9a33-4d9b-9158-7f6fd9bf38d3\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-s9g2p" Oct 14 13:25:41 crc kubenswrapper[4725]: I1014 13:25:41.970182 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjblr\" (UniqueName: \"kubernetes.io/projected/65c2f894-94a7-4e13-b4f9-16bc3a11921b-kube-api-access-mjblr\") pod \"cert-manager-webhook-5655c58dd6-jjlbv\" (UID: \"65c2f894-94a7-4e13-b4f9-16bc3a11921b\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-jjlbv" Oct 14 13:25:42 crc kubenswrapper[4725]: I1014 13:25:42.055042 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-kpptr" Oct 14 13:25:42 crc kubenswrapper[4725]: I1014 13:25:42.078389 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-s9g2p" Oct 14 13:25:42 crc kubenswrapper[4725]: I1014 13:25:42.087344 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-jjlbv" Oct 14 13:25:42 crc kubenswrapper[4725]: I1014 13:25:42.323079 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-jjlbv"] Oct 14 13:25:42 crc kubenswrapper[4725]: I1014 13:25:42.339193 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 13:25:42 crc kubenswrapper[4725]: I1014 13:25:42.479069 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-kpptr"] Oct 14 13:25:42 crc kubenswrapper[4725]: I1014 13:25:42.481824 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-s9g2p"] Oct 14 13:25:42 crc kubenswrapper[4725]: W1014 13:25:42.485952 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39534dc6_c413_407f_a4d4_1d129d0dcdf3.slice/crio-47bab7fc4cd7591c70a3ea495747086aa55da777e67e6e39ffe4becbb785a7fa WatchSource:0}: Error finding container 47bab7fc4cd7591c70a3ea495747086aa55da777e67e6e39ffe4becbb785a7fa: Status 404 returned error can't find the container with id 47bab7fc4cd7591c70a3ea495747086aa55da777e67e6e39ffe4becbb785a7fa Oct 14 13:25:42 crc kubenswrapper[4725]: W1014 13:25:42.486264 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6145f04_9a33_4d9b_9158_7f6fd9bf38d3.slice/crio-75e32b13964c5b480c7089030dcafbb3afbf3357e0ef9d6c981e9354b61826e5 WatchSource:0}: Error finding container 75e32b13964c5b480c7089030dcafbb3afbf3357e0ef9d6c981e9354b61826e5: Status 404 returned error can't find the container with id 75e32b13964c5b480c7089030dcafbb3afbf3357e0ef9d6c981e9354b61826e5 Oct 14 13:25:43 crc kubenswrapper[4725]: I1014 13:25:43.017101 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-jjlbv" event={"ID":"65c2f894-94a7-4e13-b4f9-16bc3a11921b","Type":"ContainerStarted","Data":"f63024d4517dfb91deead4bd9f2c97bff0ae740f89335cd4481a46eecc66d4de"} Oct 14 13:25:43 crc kubenswrapper[4725]: I1014 13:25:43.018635 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-kpptr" event={"ID":"39534dc6-c413-407f-a4d4-1d129d0dcdf3","Type":"ContainerStarted","Data":"47bab7fc4cd7591c70a3ea495747086aa55da777e67e6e39ffe4becbb785a7fa"} Oct 14 13:25:43 crc kubenswrapper[4725]: I1014 13:25:43.019824 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-s9g2p" event={"ID":"f6145f04-9a33-4d9b-9158-7f6fd9bf38d3","Type":"ContainerStarted","Data":"75e32b13964c5b480c7089030dcafbb3afbf3357e0ef9d6c981e9354b61826e5"} Oct 14 13:25:45 crc kubenswrapper[4725]: I1014 13:25:45.032784 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-jjlbv" event={"ID":"65c2f894-94a7-4e13-b4f9-16bc3a11921b","Type":"ContainerStarted","Data":"975633bce32944b940556e1c48685e02c9587361bd45586d3b254b13f0611d0e"} Oct 14 13:25:45 crc kubenswrapper[4725]: I1014 13:25:45.033733 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-jjlbv" Oct 14 13:25:45 crc kubenswrapper[4725]: I1014 13:25:45.050966 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-jjlbv" 
podStartSLOduration=1.879004486 podStartE2EDuration="4.050945984s" podCreationTimestamp="2025-10-14 13:25:41 +0000 UTC" firstStartedPulling="2025-10-14 13:25:42.338936096 +0000 UTC m=+659.187370905" lastFinishedPulling="2025-10-14 13:25:44.510877584 +0000 UTC m=+661.359312403" observedRunningTime="2025-10-14 13:25:45.04828088 +0000 UTC m=+661.896715689" watchObservedRunningTime="2025-10-14 13:25:45.050945984 +0000 UTC m=+661.899380803" Oct 14 13:25:46 crc kubenswrapper[4725]: I1014 13:25:46.040620 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-kpptr" event={"ID":"39534dc6-c413-407f-a4d4-1d129d0dcdf3","Type":"ContainerStarted","Data":"cdb972e73b53e4a9adde5807c3f2d843f1f1abb6f39de41647af2ca4cd94b4b1"} Oct 14 13:25:46 crc kubenswrapper[4725]: I1014 13:25:46.043156 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-s9g2p" event={"ID":"f6145f04-9a33-4d9b-9158-7f6fd9bf38d3","Type":"ContainerStarted","Data":"ca2fb32a2ba755e5ba6611fd3d2cc506857ea71db173c923d28eec358450d7ab"} Oct 14 13:25:46 crc kubenswrapper[4725]: I1014 13:25:46.055856 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-kpptr" podStartSLOduration=1.897462832 podStartE2EDuration="5.055842614s" podCreationTimestamp="2025-10-14 13:25:41 +0000 UTC" firstStartedPulling="2025-10-14 13:25:42.488694951 +0000 UTC m=+659.337129760" lastFinishedPulling="2025-10-14 13:25:45.647074703 +0000 UTC m=+662.495509542" observedRunningTime="2025-10-14 13:25:46.053422138 +0000 UTC m=+662.901856937" watchObservedRunningTime="2025-10-14 13:25:46.055842614 +0000 UTC m=+662.904277423" Oct 14 13:25:52 crc kubenswrapper[4725]: I1014 13:25:52.090398 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-jjlbv" Oct 14 13:25:52 crc kubenswrapper[4725]: I1014 13:25:52.106814 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-s9g2p" podStartSLOduration=7.956153945 podStartE2EDuration="11.106793675s" podCreationTimestamp="2025-10-14 13:25:41 +0000 UTC" firstStartedPulling="2025-10-14 13:25:42.489318098 +0000 UTC m=+659.337752907" lastFinishedPulling="2025-10-14 13:25:45.639957828 +0000 UTC m=+662.488392637" observedRunningTime="2025-10-14 13:25:46.069350776 +0000 UTC m=+662.917785635" watchObservedRunningTime="2025-10-14 13:25:52.106793675 +0000 UTC m=+668.955228494" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.405099 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9v9qj"] Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.406431 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" containerName="ovn-controller" containerID="cri-o://06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08" gracePeriod=30 Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.406993 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" containerName="sbdb" containerID="cri-o://7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b" gracePeriod=30 Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.407063 4725 kuberuntime_container.go:808] "Killing 
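The startup-latency entries above are internally consistent. For cert-manager-webhook-5655c58dd6-jjlbv, podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (4.050945984s), and podStartSLOduration additionally subtracts the image-pull window; the numbers line up exactly when the pull window is taken from the monotonic offsets (the m=+... values) rather than the wall-clock strings. A short Go check of that arithmetic, using only values copied from the log:

```go
// slo_check.go: re-deriving the cert-manager-webhook startup numbers.
// All timestamps and offsets below are copied verbatim from the log.
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-10-14 13:25:41 +0000 UTC")            // podCreationTimestamp
	running := mustParse("2025-10-14 13:25:45.050945984 +0000 UTC")  // watchObservedRunningTime

	e2e := running.Sub(created)
	fmt.Println(e2e) // 4.050945984s == the logged podStartE2EDuration

	// Image-pull window from the monotonic offsets (m=+...) in the log:
	// lastFinishedPulling minus firstStartedPulling.
	pull := 661.359312403 - 659.187370905
	fmt.Printf("%.9f\n", e2e.Seconds()-pull) // 1.879004486 == podStartSLOduration
}
```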
container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" containerName="nbdb" containerID="cri-o://91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109" gracePeriod=30 Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.407123 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" containerName="northd" containerID="cri-o://50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916" gracePeriod=30 Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.407180 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11" gracePeriod=30 Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.407242 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" containerName="kube-rbac-proxy-node" containerID="cri-o://fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d" gracePeriod=30 Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.407299 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" containerName="ovn-acl-logging" containerID="cri-o://bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e" gracePeriod=30 Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.454130 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" containerName="ovnkube-controller" containerID="cri-o://c52eeb4bfd39980a5c20ddd0389da2011ca60292fa4c01cfea1d5c9d45d297b4" gracePeriod=30 Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.742408 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9v9qj_38d54d71-93d1-4cde-940e-a371117f59bd/ovnkube-controller/4.log" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.743398 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9v9qj_38d54d71-93d1-4cde-940e-a371117f59bd/ovnkube-controller/3.log" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.745721 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9v9qj_38d54d71-93d1-4cde-940e-a371117f59bd/ovn-acl-logging/0.log" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.746145 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9v9qj_38d54d71-93d1-4cde-940e-a371117f59bd/ovn-controller/0.log" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.746504 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.802114 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6rlq6"] Oct 14 13:26:09 crc kubenswrapper[4725]: E1014 13:26:09.802292 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" containerName="kubecfg-setup" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.802305 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" containerName="kubecfg-setup" Oct 14 13:26:09 crc kubenswrapper[4725]: E1014 13:26:09.802318 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" containerName="ovnkube-controller" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.802324 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" containerName="ovnkube-controller" Oct 14 13:26:09 crc kubenswrapper[4725]: E1014 13:26:09.802332 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" containerName="ovn-controller" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.802338 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" containerName="ovn-controller" Oct 14 13:26:09 crc kubenswrapper[4725]: E1014 13:26:09.802350 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" containerName="kube-rbac-proxy-node" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.802359 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" containerName="kube-rbac-proxy-node" Oct 14 13:26:09 crc kubenswrapper[4725]: E1014 13:26:09.802370 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" containerName="ovnkube-controller" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.802377 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" containerName="ovnkube-controller" Oct 14 13:26:09 crc kubenswrapper[4725]: E1014 13:26:09.802385 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" containerName="nbdb" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.802390 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" containerName="nbdb" Oct 14 13:26:09 crc kubenswrapper[4725]: E1014 13:26:09.802398 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" containerName="ovnkube-controller" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.802404 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" containerName="ovnkube-controller" Oct 14 13:26:09 crc kubenswrapper[4725]: E1014 13:26:09.802412 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" containerName="kube-rbac-proxy-ovn-metrics" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.802417 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" containerName="kube-rbac-proxy-ovn-metrics" Oct 14 13:26:09 crc kubenswrapper[4725]: E1014 13:26:09.802425 4725 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" containerName="northd" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.802430 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" containerName="northd" Oct 14 13:26:09 crc kubenswrapper[4725]: E1014 13:26:09.802437 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" containerName="sbdb" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.802442 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" containerName="sbdb" Oct 14 13:26:09 crc kubenswrapper[4725]: E1014 13:26:09.802473 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" containerName="ovn-acl-logging" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.802479 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" containerName="ovn-acl-logging" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.802579 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" containerName="ovnkube-controller" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.802588 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" containerName="ovn-acl-logging" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.802597 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" containerName="ovn-controller" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.802605 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" containerName="kube-rbac-proxy-node" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.802613 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" containerName="ovnkube-controller" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.802619 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" containerName="sbdb" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.802626 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" containerName="nbdb" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.802631 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" containerName="kube-rbac-proxy-ovn-metrics" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.802638 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" containerName="northd" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.802645 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" containerName="ovnkube-controller" Oct 14 13:26:09 crc kubenswrapper[4725]: E1014 13:26:09.802726 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" containerName="ovnkube-controller" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.802733 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" containerName="ovnkube-controller" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.802827 4725 
memory_manager.go:354] "RemoveStaleState removing state" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" containerName="ovnkube-controller" Oct 14 13:26:09 crc kubenswrapper[4725]: E1014 13:26:09.802907 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" containerName="ovnkube-controller" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.802914 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" containerName="ovnkube-controller" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.803000 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" containerName="ovnkube-controller" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.804185 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.859301 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/38d54d71-93d1-4cde-940e-a371117f59bd-env-overrides\") pod \"38d54d71-93d1-4cde-940e-a371117f59bd\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.859335 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-host-slash\") pod \"38d54d71-93d1-4cde-940e-a371117f59bd\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.859372 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"38d54d71-93d1-4cde-940e-a371117f59bd\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.859406 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-systemd-units\") pod \"38d54d71-93d1-4cde-940e-a371117f59bd\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.859425 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-run-openvswitch\") pod \"38d54d71-93d1-4cde-940e-a371117f59bd\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.859495 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/38d54d71-93d1-4cde-940e-a371117f59bd-ovnkube-script-lib\") pod \"38d54d71-93d1-4cde-940e-a371117f59bd\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.859515 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-host-run-netns\") pod \"38d54d71-93d1-4cde-940e-a371117f59bd\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.859536 4725 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-etc-openvswitch\") pod \"38d54d71-93d1-4cde-940e-a371117f59bd\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.859557 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-host-cni-netd\") pod \"38d54d71-93d1-4cde-940e-a371117f59bd\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.859577 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-host-kubelet\") pod \"38d54d71-93d1-4cde-940e-a371117f59bd\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.859603 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7h8xd\" (UniqueName: \"kubernetes.io/projected/38d54d71-93d1-4cde-940e-a371117f59bd-kube-api-access-7h8xd\") pod \"38d54d71-93d1-4cde-940e-a371117f59bd\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.859647 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-run-ovn\") pod \"38d54d71-93d1-4cde-940e-a371117f59bd\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.859677 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-log-socket\") pod \"38d54d71-93d1-4cde-940e-a371117f59bd\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.859702 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/38d54d71-93d1-4cde-940e-a371117f59bd-ovn-node-metrics-cert\") pod \"38d54d71-93d1-4cde-940e-a371117f59bd\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.859727 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-node-log\") pod \"38d54d71-93d1-4cde-940e-a371117f59bd\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.859751 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/38d54d71-93d1-4cde-940e-a371117f59bd-ovnkube-config\") pod \"38d54d71-93d1-4cde-940e-a371117f59bd\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.859770 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "38d54d71-93d1-4cde-940e-a371117f59bd" (UID: "38d54d71-93d1-4cde-940e-a371117f59bd"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.859784 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-host-run-ovn-kubernetes\") pod \"38d54d71-93d1-4cde-940e-a371117f59bd\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.859807 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-run-systemd\") pod \"38d54d71-93d1-4cde-940e-a371117f59bd\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.859813 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-host-slash" (OuterVolumeSpecName: "host-slash") pod "38d54d71-93d1-4cde-940e-a371117f59bd" (UID: "38d54d71-93d1-4cde-940e-a371117f59bd"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.859801 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "38d54d71-93d1-4cde-940e-a371117f59bd" (UID: "38d54d71-93d1-4cde-940e-a371117f59bd"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.859842 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-host-cni-bin\") pod \"38d54d71-93d1-4cde-940e-a371117f59bd\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.859839 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "38d54d71-93d1-4cde-940e-a371117f59bd" (UID: "38d54d71-93d1-4cde-940e-a371117f59bd"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.859873 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-var-lib-openvswitch\") pod \"38d54d71-93d1-4cde-940e-a371117f59bd\" (UID: \"38d54d71-93d1-4cde-940e-a371117f59bd\") " Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.859849 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "38d54d71-93d1-4cde-940e-a371117f59bd" (UID: "38d54d71-93d1-4cde-940e-a371117f59bd"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.859914 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "38d54d71-93d1-4cde-940e-a371117f59bd" (UID: "38d54d71-93d1-4cde-940e-a371117f59bd"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.859912 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-log-socket" (OuterVolumeSpecName: "log-socket") pod "38d54d71-93d1-4cde-940e-a371117f59bd" (UID: "38d54d71-93d1-4cde-940e-a371117f59bd"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.859965 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "38d54d71-93d1-4cde-940e-a371117f59bd" (UID: "38d54d71-93d1-4cde-940e-a371117f59bd"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.859940 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "38d54d71-93d1-4cde-940e-a371117f59bd" (UID: "38d54d71-93d1-4cde-940e-a371117f59bd"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.860008 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "38d54d71-93d1-4cde-940e-a371117f59bd" (UID: "38d54d71-93d1-4cde-940e-a371117f59bd"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.860226 4725 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.860245 4725 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-systemd-units\") on node \"crc\" DevicePath \"\"" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.860254 4725 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-run-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.860263 4725 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.860271 4725 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-host-cni-netd\") on node \"crc\" DevicePath \"\"" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.860281 4725 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-host-kubelet\") on node \"crc\" DevicePath \"\"" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.860289 4725 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.860299 4725 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-log-socket\") on node \"crc\" DevicePath \"\"" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.860307 4725 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.860318 4725 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-host-slash\") on node \"crc\" DevicePath \"\"" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.860349 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "38d54d71-93d1-4cde-940e-a371117f59bd" (UID: "38d54d71-93d1-4cde-940e-a371117f59bd"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.860386 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-node-log" (OuterVolumeSpecName: "node-log") pod "38d54d71-93d1-4cde-940e-a371117f59bd" (UID: "38d54d71-93d1-4cde-940e-a371117f59bd"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.859719 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "38d54d71-93d1-4cde-940e-a371117f59bd" (UID: "38d54d71-93d1-4cde-940e-a371117f59bd"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.860510 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38d54d71-93d1-4cde-940e-a371117f59bd-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "38d54d71-93d1-4cde-940e-a371117f59bd" (UID: "38d54d71-93d1-4cde-940e-a371117f59bd"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.861767 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38d54d71-93d1-4cde-940e-a371117f59bd-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "38d54d71-93d1-4cde-940e-a371117f59bd" (UID: "38d54d71-93d1-4cde-940e-a371117f59bd"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.862023 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38d54d71-93d1-4cde-940e-a371117f59bd-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "38d54d71-93d1-4cde-940e-a371117f59bd" (UID: "38d54d71-93d1-4cde-940e-a371117f59bd"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.860629 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "38d54d71-93d1-4cde-940e-a371117f59bd" (UID: "38d54d71-93d1-4cde-940e-a371117f59bd"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.868003 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38d54d71-93d1-4cde-940e-a371117f59bd-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "38d54d71-93d1-4cde-940e-a371117f59bd" (UID: "38d54d71-93d1-4cde-940e-a371117f59bd"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.869660 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38d54d71-93d1-4cde-940e-a371117f59bd-kube-api-access-7h8xd" (OuterVolumeSpecName: "kube-api-access-7h8xd") pod "38d54d71-93d1-4cde-940e-a371117f59bd" (UID: "38d54d71-93d1-4cde-940e-a371117f59bd"). 
InnerVolumeSpecName "kube-api-access-7h8xd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.880733 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "38d54d71-93d1-4cde-940e-a371117f59bd" (UID: "38d54d71-93d1-4cde-940e-a371117f59bd"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.961124 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38856e1b-e019-49cd-ae2c-a174e2a7faa8-run-openvswitch\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.961168 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74gws\" (UniqueName: \"kubernetes.io/projected/38856e1b-e019-49cd-ae2c-a174e2a7faa8-kube-api-access-74gws\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.961200 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38856e1b-e019-49cd-ae2c-a174e2a7faa8-etc-openvswitch\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.961221 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/38856e1b-e019-49cd-ae2c-a174e2a7faa8-env-overrides\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.961235 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/38856e1b-e019-49cd-ae2c-a174e2a7faa8-ovnkube-script-lib\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.961251 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/38856e1b-e019-49cd-ae2c-a174e2a7faa8-host-run-netns\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.961264 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/38856e1b-e019-49cd-ae2c-a174e2a7faa8-systemd-units\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.961282 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" 
(UniqueName: \"kubernetes.io/host-path/38856e1b-e019-49cd-ae2c-a174e2a7faa8-run-ovn\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.961383 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/38856e1b-e019-49cd-ae2c-a174e2a7faa8-ovn-node-metrics-cert\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.961475 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/38856e1b-e019-49cd-ae2c-a174e2a7faa8-node-log\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.961500 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/38856e1b-e019-49cd-ae2c-a174e2a7faa8-ovnkube-config\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.961529 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/38856e1b-e019-49cd-ae2c-a174e2a7faa8-host-kubelet\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.961602 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/38856e1b-e019-49cd-ae2c-a174e2a7faa8-host-slash\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.961630 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38856e1b-e019-49cd-ae2c-a174e2a7faa8-host-run-ovn-kubernetes\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.961688 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/38856e1b-e019-49cd-ae2c-a174e2a7faa8-host-cni-bin\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.961731 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/38856e1b-e019-49cd-ae2c-a174e2a7faa8-run-systemd\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.961757 4725 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38856e1b-e019-49cd-ae2c-a174e2a7faa8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.961784 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38856e1b-e019-49cd-ae2c-a174e2a7faa8-var-lib-openvswitch\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.961807 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/38856e1b-e019-49cd-ae2c-a174e2a7faa8-host-cni-netd\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.961834 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/38856e1b-e019-49cd-ae2c-a174e2a7faa8-log-socket\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.961902 4725 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.961917 4725 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-run-systemd\") on node \"crc\" DevicePath \"\"" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.961929 4725 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-host-cni-bin\") on node \"crc\" DevicePath \"\"" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.961942 4725 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/38d54d71-93d1-4cde-940e-a371117f59bd-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.961955 4725 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/38d54d71-93d1-4cde-940e-a371117f59bd-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.961966 4725 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-host-run-netns\") on node \"crc\" DevicePath \"\"" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.961978 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7h8xd\" (UniqueName: \"kubernetes.io/projected/38d54d71-93d1-4cde-940e-a371117f59bd-kube-api-access-7h8xd\") on node \"crc\" DevicePath \"\"" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.961990 4725 
reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/38d54d71-93d1-4cde-940e-a371117f59bd-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.962002 4725 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/38d54d71-93d1-4cde-940e-a371117f59bd-node-log\") on node \"crc\" DevicePath \"\"" Oct 14 13:26:09 crc kubenswrapper[4725]: I1014 13:26:09.962013 4725 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/38d54d71-93d1-4cde-940e-a371117f59bd-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.062890 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/38856e1b-e019-49cd-ae2c-a174e2a7faa8-env-overrides\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.062923 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/38856e1b-e019-49cd-ae2c-a174e2a7faa8-ovnkube-script-lib\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.062941 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/38856e1b-e019-49cd-ae2c-a174e2a7faa8-host-run-netns\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.062958 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/38856e1b-e019-49cd-ae2c-a174e2a7faa8-systemd-units\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.062974 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/38856e1b-e019-49cd-ae2c-a174e2a7faa8-run-ovn\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.062995 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/38856e1b-e019-49cd-ae2c-a174e2a7faa8-ovn-node-metrics-cert\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.063014 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/38856e1b-e019-49cd-ae2c-a174e2a7faa8-node-log\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.063029 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/38856e1b-e019-49cd-ae2c-a174e2a7faa8-ovnkube-config\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.063053 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/38856e1b-e019-49cd-ae2c-a174e2a7faa8-host-kubelet\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.063081 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/38856e1b-e019-49cd-ae2c-a174e2a7faa8-host-slash\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.063098 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38856e1b-e019-49cd-ae2c-a174e2a7faa8-host-run-ovn-kubernetes\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.063115 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/38856e1b-e019-49cd-ae2c-a174e2a7faa8-host-cni-bin\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.063128 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/38856e1b-e019-49cd-ae2c-a174e2a7faa8-run-systemd\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.063145 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38856e1b-e019-49cd-ae2c-a174e2a7faa8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.063174 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38856e1b-e019-49cd-ae2c-a174e2a7faa8-var-lib-openvswitch\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.063187 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/38856e1b-e019-49cd-ae2c-a174e2a7faa8-host-cni-netd\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.063204 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/38856e1b-e019-49cd-ae2c-a174e2a7faa8-log-socket\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.063231 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38856e1b-e019-49cd-ae2c-a174e2a7faa8-run-openvswitch\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.063246 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74gws\" (UniqueName: \"kubernetes.io/projected/38856e1b-e019-49cd-ae2c-a174e2a7faa8-kube-api-access-74gws\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.063262 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38856e1b-e019-49cd-ae2c-a174e2a7faa8-etc-openvswitch\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.064037 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38856e1b-e019-49cd-ae2c-a174e2a7faa8-var-lib-openvswitch\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.064050 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/38856e1b-e019-49cd-ae2c-a174e2a7faa8-run-systemd\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.064095 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38856e1b-e019-49cd-ae2c-a174e2a7faa8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.064100 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38856e1b-e019-49cd-ae2c-a174e2a7faa8-run-openvswitch\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.064138 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38856e1b-e019-49cd-ae2c-a174e2a7faa8-host-run-ovn-kubernetes\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.064141 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/38856e1b-e019-49cd-ae2c-a174e2a7faa8-run-ovn\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.064164 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/38856e1b-e019-49cd-ae2c-a174e2a7faa8-log-socket\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.064141 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/38856e1b-e019-49cd-ae2c-a174e2a7faa8-host-cni-netd\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.064191 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/38856e1b-e019-49cd-ae2c-a174e2a7faa8-host-run-netns\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.064217 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/38856e1b-e019-49cd-ae2c-a174e2a7faa8-systemd-units\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.064231 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38856e1b-e019-49cd-ae2c-a174e2a7faa8-etc-openvswitch\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.064278 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/38856e1b-e019-49cd-ae2c-a174e2a7faa8-host-kubelet\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.064262 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/38856e1b-e019-49cd-ae2c-a174e2a7faa8-host-cni-bin\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.064329 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/38856e1b-e019-49cd-ae2c-a174e2a7faa8-node-log\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.064508 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/38856e1b-e019-49cd-ae2c-a174e2a7faa8-host-slash\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 
13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.065206 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/38856e1b-e019-49cd-ae2c-a174e2a7faa8-env-overrides\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6"
Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.065254 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/38856e1b-e019-49cd-ae2c-a174e2a7faa8-ovnkube-script-lib\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6"
Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.065291 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/38856e1b-e019-49cd-ae2c-a174e2a7faa8-ovnkube-config\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6"
Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.068352 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/38856e1b-e019-49cd-ae2c-a174e2a7faa8-ovn-node-metrics-cert\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6"
Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.078911 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74gws\" (UniqueName: \"kubernetes.io/projected/38856e1b-e019-49cd-ae2c-a174e2a7faa8-kube-api-access-74gws\") pod \"ovnkube-node-6rlq6\" (UID: \"38856e1b-e019-49cd-ae2c-a174e2a7faa8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6"
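The block above is the kubelet volume reconciler completing one full cycle for the replacement pod ovnkube-node-6rlq6: every volume declared in the pod spec is first checked with VerifyControllerAttachedVolume, then mounted with MountVolume.SetUp. The host-path volumes report success within the same millisecond because they are plain bind mounts, while the configmap, secret, and projected service-account volumes land a few milliseconds later since their contents must first be written to disk. Only once all mounts succeed does the kubelet move on to creating a sandbox for the pod, which is the next entry below. A minimal sketch of that two-phase shape, using invented names (the real logic lives in kubelet's reconciler_common.go and operation_generator.go and is considerably more involved):

```go
// Illustrative sketch only: invented names, not the kubelet's real API.
package main

import "fmt"

type volume struct {
	name   string
	plugin string // e.g. "kubernetes.io/host-path", "kubernetes.io/configmap"
}

// reconcile mirrors the two-phase pattern visible in the log: verify that
// every declared volume is attached, then set each one up (mount it).
func reconcile(podUID string, vols []volume) {
	for _, v := range vols {
		fmt.Printf("VerifyControllerAttachedVolume started for %q (pod %s)\n", v.name, podUID)
	}
	for _, v := range vols {
		// host-path volumes are plain bind mounts and succeed immediately;
		// configmap/secret/projected contents must first be written to disk.
		fmt.Printf("MountVolume.SetUp succeeded for %q (plugin %s)\n", v.name, v.plugin)
	}
}

func main() {
	reconcile("38856e1b-e019-49cd-ae2c-a174e2a7faa8", []volume{
		{name: "run-openvswitch", plugin: "kubernetes.io/host-path"},
		{name: "ovnkube-config", plugin: "kubernetes.io/configmap"},
		{name: "ovn-node-metrics-cert", plugin: "kubernetes.io/secret"},
		{name: "kube-api-access-74gws", plugin: "kubernetes.io/projected"},
	})
}
```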
Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.125279 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6"
Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.187694 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kbgwl_d4ed727c-f4d1-47cd-a218-e22803eb1750/kube-multus/2.log"
Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.188120 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kbgwl_d4ed727c-f4d1-47cd-a218-e22803eb1750/kube-multus/1.log"
Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.188149 4725 generic.go:334] "Generic (PLEG): container finished" podID="d4ed727c-f4d1-47cd-a218-e22803eb1750" containerID="a99474c5c2939852e49d51916f4f54fb8a55b54572012502692bfefcee420f3e" exitCode=2
Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.188195 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kbgwl" event={"ID":"d4ed727c-f4d1-47cd-a218-e22803eb1750","Type":"ContainerDied","Data":"a99474c5c2939852e49d51916f4f54fb8a55b54572012502692bfefcee420f3e"}
Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.188260 4725 scope.go:117] "RemoveContainer" containerID="b42368f230c49822144cc9910a09a22f268f840c3540e0a68f7c16659e5061bb"
Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.188699 4725 scope.go:117] "RemoveContainer" containerID="a99474c5c2939852e49d51916f4f54fb8a55b54572012502692bfefcee420f3e"
Oct 14 13:26:10 crc kubenswrapper[4725]: E1014 13:26:10.188957 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-kbgwl_openshift-multus(d4ed727c-f4d1-47cd-a218-e22803eb1750)\"" pod="openshift-multus/multus-kbgwl" podUID="d4ed727c-f4d1-47cd-a218-e22803eb1750"
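Here kube-multus has exited again (exitCode=2), an older instance is pruned with RemoveContainer, and the kubelet declines an immediate restart, parking the pod behind a CrashLoopBackOff delay. Upstream kubelet's documented schedule backs off 10s after the first crash, doubles per consecutive crash up to a 5m cap, and resets after a container runs cleanly for 10 minutes, so the "back-off 20s" above marks the second consecutive crash. A sketch of that schedule, using the documented defaults rather than values read from this cluster:

```go
// Kubelet-style crash-loop backoff: 10s initial, doubling per consecutive
// failure, capped at 5m. These are the documented upstream defaults, not
// values taken from this node's configuration.
package main

import (
	"fmt"
	"time"
)

func backoff(consecutiveFailures int) time.Duration {
	d := 10 * time.Second
	for i := 1; i < consecutiveFailures; i++ {
		d *= 2
		if d >= 5*time.Minute {
			return 5 * time.Minute
		}
	}
	return d
}

func main() {
	// Prints 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s: the 20s entry is
	// exactly what the log shows for multus-kbgwl's second crash.
	for n := 1; n <= 7; n++ {
		fmt.Printf("failure %d -> back-off %s\n", n, backoff(n))
	}
}
```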
Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.190889 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9v9qj_38d54d71-93d1-4cde-940e-a371117f59bd/ovnkube-controller/4.log"
Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.192236 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9v9qj_38d54d71-93d1-4cde-940e-a371117f59bd/ovnkube-controller/3.log"
Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.197993 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9v9qj_38d54d71-93d1-4cde-940e-a371117f59bd/ovn-acl-logging/0.log"
Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.198534 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9v9qj_38d54d71-93d1-4cde-940e-a371117f59bd/ovn-controller/0.log"
Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.198973 4725 generic.go:334] "Generic (PLEG): container finished" podID="38d54d71-93d1-4cde-940e-a371117f59bd" containerID="c52eeb4bfd39980a5c20ddd0389da2011ca60292fa4c01cfea1d5c9d45d297b4" exitCode=2
Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199001 4725 generic.go:334] "Generic (PLEG): container finished" podID="38d54d71-93d1-4cde-940e-a371117f59bd" containerID="7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b" exitCode=0
Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199011 4725 generic.go:334] "Generic (PLEG): container finished" podID="38d54d71-93d1-4cde-940e-a371117f59bd" containerID="91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109" exitCode=0
Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199022 4725 generic.go:334] "Generic (PLEG): container finished" podID="38d54d71-93d1-4cde-940e-a371117f59bd" containerID="50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916" exitCode=0
Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199031 4725 generic.go:334] "Generic (PLEG): container finished" podID="38d54d71-93d1-4cde-940e-a371117f59bd" containerID="76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11" exitCode=0
Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199040 4725 generic.go:334] "Generic (PLEG): container finished" podID="38d54d71-93d1-4cde-940e-a371117f59bd" containerID="fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d" exitCode=0
Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199048 4725 generic.go:334] "Generic (PLEG): container finished" podID="38d54d71-93d1-4cde-940e-a371117f59bd" containerID="bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e" exitCode=143
Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199057 4725 generic.go:334] "Generic (PLEG): container finished" podID="38d54d71-93d1-4cde-940e-a371117f59bd" containerID="06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08" exitCode=143
Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199101 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" event={"ID":"38d54d71-93d1-4cde-940e-a371117f59bd","Type":"ContainerDied","Data":"c52eeb4bfd39980a5c20ddd0389da2011ca60292fa4c01cfea1d5c9d45d297b4"}
Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199127 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" event={"ID":"38d54d71-93d1-4cde-940e-a371117f59bd","Type":"ContainerDied","Data":"7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b"}
Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199141 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" event={"ID":"38d54d71-93d1-4cde-940e-a371117f59bd","Type":"ContainerDied","Data":"91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109"}
Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199154 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" event={"ID":"38d54d71-93d1-4cde-940e-a371117f59bd","Type":"ContainerDied","Data":"50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916"}
Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199166 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" event={"ID":"38d54d71-93d1-4cde-940e-a371117f59bd","Type":"ContainerDied","Data":"76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11"}
Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199177 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" event={"ID":"38d54d71-93d1-4cde-940e-a371117f59bd","Type":"ContainerDied","Data":"fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d"}
Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199190 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c52eeb4bfd39980a5c20ddd0389da2011ca60292fa4c01cfea1d5c9d45d297b4"}
Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199202 4725 pod_container_deletor.go:114] "Failed to issue the request to remove
container" containerID={"Type":"cri-o","ID":"c17bed2977625610ad16227eb2cf9d89f9964a0b6a3305056c46d4230602f4f9"} Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199209 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b"} Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199217 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109"} Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199225 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916"} Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199233 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11"} Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199240 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d"} Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199247 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e"} Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199254 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08"} Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199262 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b"} Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199271 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" event={"ID":"38d54d71-93d1-4cde-940e-a371117f59bd","Type":"ContainerDied","Data":"bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e"} Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199281 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c52eeb4bfd39980a5c20ddd0389da2011ca60292fa4c01cfea1d5c9d45d297b4"} Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199290 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c17bed2977625610ad16227eb2cf9d89f9964a0b6a3305056c46d4230602f4f9"} Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199297 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b"} Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199306 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109"} Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199314 4725 pod_container_deletor.go:114] "Failed to issue the request to remove 
container" containerID={"Type":"cri-o","ID":"50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916"} Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199321 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11"} Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199329 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d"} Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199336 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e"} Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199343 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08"} Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199350 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b"} Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199360 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" event={"ID":"38d54d71-93d1-4cde-940e-a371117f59bd","Type":"ContainerDied","Data":"06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08"} Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199372 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c52eeb4bfd39980a5c20ddd0389da2011ca60292fa4c01cfea1d5c9d45d297b4"} Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199380 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c17bed2977625610ad16227eb2cf9d89f9964a0b6a3305056c46d4230602f4f9"} Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199387 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b"} Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199395 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109"} Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199404 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916"} Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199411 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11"} Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199420 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d"} Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199427 4725 pod_container_deletor.go:114] "Failed to issue the request to remove 
container" containerID={"Type":"cri-o","ID":"bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e"} Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199435 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08"} Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199443 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b"} Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199478 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" event={"ID":"38d54d71-93d1-4cde-940e-a371117f59bd","Type":"ContainerDied","Data":"194be38e6bd2bf02aa98279e4967036f2f2beb4448651b5b3108f5f6e8e21012"} Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199490 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c52eeb4bfd39980a5c20ddd0389da2011ca60292fa4c01cfea1d5c9d45d297b4"} Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199498 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c17bed2977625610ad16227eb2cf9d89f9964a0b6a3305056c46d4230602f4f9"} Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199505 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b"} Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199513 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109"} Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199520 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916"} Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199527 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11"} Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199534 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d"} Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199541 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e"} Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199548 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08"} Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199555 4725 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b"} Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.199666 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9v9qj" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.202012 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" event={"ID":"38856e1b-e019-49cd-ae2c-a174e2a7faa8","Type":"ContainerStarted","Data":"e67328fcfb05013d60e3366cc3722d3fe37db4faf35df2b6275066ca5f7af865"} Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.230700 4725 scope.go:117] "RemoveContainer" containerID="c52eeb4bfd39980a5c20ddd0389da2011ca60292fa4c01cfea1d5c9d45d297b4" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.238511 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9v9qj"] Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.240239 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9v9qj"] Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.256665 4725 scope.go:117] "RemoveContainer" containerID="c17bed2977625610ad16227eb2cf9d89f9964a0b6a3305056c46d4230602f4f9" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.281352 4725 scope.go:117] "RemoveContainer" containerID="7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.303306 4725 scope.go:117] "RemoveContainer" containerID="91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.317038 4725 scope.go:117] "RemoveContainer" containerID="50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.329540 4725 scope.go:117] "RemoveContainer" containerID="76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.344066 4725 scope.go:117] "RemoveContainer" containerID="fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.404112 4725 scope.go:117] "RemoveContainer" containerID="bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.416431 4725 scope.go:117] "RemoveContainer" containerID="06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.428998 4725 scope.go:117] "RemoveContainer" containerID="e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.442040 4725 scope.go:117] "RemoveContainer" containerID="c52eeb4bfd39980a5c20ddd0389da2011ca60292fa4c01cfea1d5c9d45d297b4" Oct 14 13:26:10 crc kubenswrapper[4725]: E1014 13:26:10.442418 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c52eeb4bfd39980a5c20ddd0389da2011ca60292fa4c01cfea1d5c9d45d297b4\": container with ID starting with c52eeb4bfd39980a5c20ddd0389da2011ca60292fa4c01cfea1d5c9d45d297b4 not found: ID does not exist" containerID="c52eeb4bfd39980a5c20ddd0389da2011ca60292fa4c01cfea1d5c9d45d297b4" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.442468 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c52eeb4bfd39980a5c20ddd0389da2011ca60292fa4c01cfea1d5c9d45d297b4"} err="failed to get container status \"c52eeb4bfd39980a5c20ddd0389da2011ca60292fa4c01cfea1d5c9d45d297b4\": rpc error: code = NotFound desc = could not find 
container \"c52eeb4bfd39980a5c20ddd0389da2011ca60292fa4c01cfea1d5c9d45d297b4\": container with ID starting with c52eeb4bfd39980a5c20ddd0389da2011ca60292fa4c01cfea1d5c9d45d297b4 not found: ID does not exist" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.442494 4725 scope.go:117] "RemoveContainer" containerID="c17bed2977625610ad16227eb2cf9d89f9964a0b6a3305056c46d4230602f4f9" Oct 14 13:26:10 crc kubenswrapper[4725]: E1014 13:26:10.442953 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c17bed2977625610ad16227eb2cf9d89f9964a0b6a3305056c46d4230602f4f9\": container with ID starting with c17bed2977625610ad16227eb2cf9d89f9964a0b6a3305056c46d4230602f4f9 not found: ID does not exist" containerID="c17bed2977625610ad16227eb2cf9d89f9964a0b6a3305056c46d4230602f4f9" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.442982 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c17bed2977625610ad16227eb2cf9d89f9964a0b6a3305056c46d4230602f4f9"} err="failed to get container status \"c17bed2977625610ad16227eb2cf9d89f9964a0b6a3305056c46d4230602f4f9\": rpc error: code = NotFound desc = could not find container \"c17bed2977625610ad16227eb2cf9d89f9964a0b6a3305056c46d4230602f4f9\": container with ID starting with c17bed2977625610ad16227eb2cf9d89f9964a0b6a3305056c46d4230602f4f9 not found: ID does not exist" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.443005 4725 scope.go:117] "RemoveContainer" containerID="7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b" Oct 14 13:26:10 crc kubenswrapper[4725]: E1014 13:26:10.443205 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b\": container with ID starting with 7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b not found: ID does not exist" containerID="7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.443230 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b"} err="failed to get container status \"7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b\": rpc error: code = NotFound desc = could not find container \"7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b\": container with ID starting with 7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b not found: ID does not exist" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.443244 4725 scope.go:117] "RemoveContainer" containerID="91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109" Oct 14 13:26:10 crc kubenswrapper[4725]: E1014 13:26:10.443435 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109\": container with ID starting with 91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109 not found: ID does not exist" containerID="91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.443471 4725 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109"} err="failed to get container status \"91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109\": rpc error: code = NotFound desc = could not find container \"91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109\": container with ID starting with 91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109 not found: ID does not exist" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.443484 4725 scope.go:117] "RemoveContainer" containerID="50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916" Oct 14 13:26:10 crc kubenswrapper[4725]: E1014 13:26:10.443706 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916\": container with ID starting with 50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916 not found: ID does not exist" containerID="50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.443729 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916"} err="failed to get container status \"50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916\": rpc error: code = NotFound desc = could not find container \"50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916\": container with ID starting with 50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916 not found: ID does not exist" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.443742 4725 scope.go:117] "RemoveContainer" containerID="76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11" Oct 14 13:26:10 crc kubenswrapper[4725]: E1014 13:26:10.444041 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11\": container with ID starting with 76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11 not found: ID does not exist" containerID="76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.444086 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11"} err="failed to get container status \"76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11\": rpc error: code = NotFound desc = could not find container \"76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11\": container with ID starting with 76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11 not found: ID does not exist" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.444115 4725 scope.go:117] "RemoveContainer" containerID="fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d" Oct 14 13:26:10 crc kubenswrapper[4725]: E1014 13:26:10.444428 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d\": container with ID starting with fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d not found: ID does not exist" 
containerID="fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.444467 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d"} err="failed to get container status \"fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d\": rpc error: code = NotFound desc = could not find container \"fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d\": container with ID starting with fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d not found: ID does not exist" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.444481 4725 scope.go:117] "RemoveContainer" containerID="bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e" Oct 14 13:26:10 crc kubenswrapper[4725]: E1014 13:26:10.444735 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e\": container with ID starting with bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e not found: ID does not exist" containerID="bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.444752 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e"} err="failed to get container status \"bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e\": rpc error: code = NotFound desc = could not find container \"bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e\": container with ID starting with bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e not found: ID does not exist" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.444764 4725 scope.go:117] "RemoveContainer" containerID="06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08" Oct 14 13:26:10 crc kubenswrapper[4725]: E1014 13:26:10.445041 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08\": container with ID starting with 06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08 not found: ID does not exist" containerID="06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.445067 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08"} err="failed to get container status \"06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08\": rpc error: code = NotFound desc = could not find container \"06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08\": container with ID starting with 06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08 not found: ID does not exist" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.445083 4725 scope.go:117] "RemoveContainer" containerID="e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b" Oct 14 13:26:10 crc kubenswrapper[4725]: E1014 13:26:10.445276 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\": container with ID starting with e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b not found: ID does not exist" containerID="e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.445294 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b"} err="failed to get container status \"e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\": rpc error: code = NotFound desc = could not find container \"e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\": container with ID starting with e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b not found: ID does not exist" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.445305 4725 scope.go:117] "RemoveContainer" containerID="c52eeb4bfd39980a5c20ddd0389da2011ca60292fa4c01cfea1d5c9d45d297b4" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.445543 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c52eeb4bfd39980a5c20ddd0389da2011ca60292fa4c01cfea1d5c9d45d297b4"} err="failed to get container status \"c52eeb4bfd39980a5c20ddd0389da2011ca60292fa4c01cfea1d5c9d45d297b4\": rpc error: code = NotFound desc = could not find container \"c52eeb4bfd39980a5c20ddd0389da2011ca60292fa4c01cfea1d5c9d45d297b4\": container with ID starting with c52eeb4bfd39980a5c20ddd0389da2011ca60292fa4c01cfea1d5c9d45d297b4 not found: ID does not exist" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.445563 4725 scope.go:117] "RemoveContainer" containerID="c17bed2977625610ad16227eb2cf9d89f9964a0b6a3305056c46d4230602f4f9" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.445821 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c17bed2977625610ad16227eb2cf9d89f9964a0b6a3305056c46d4230602f4f9"} err="failed to get container status \"c17bed2977625610ad16227eb2cf9d89f9964a0b6a3305056c46d4230602f4f9\": rpc error: code = NotFound desc = could not find container \"c17bed2977625610ad16227eb2cf9d89f9964a0b6a3305056c46d4230602f4f9\": container with ID starting with c17bed2977625610ad16227eb2cf9d89f9964a0b6a3305056c46d4230602f4f9 not found: ID does not exist" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.445840 4725 scope.go:117] "RemoveContainer" containerID="7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.446093 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b"} err="failed to get container status \"7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b\": rpc error: code = NotFound desc = could not find container \"7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b\": container with ID starting with 7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b not found: ID does not exist" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.446115 4725 scope.go:117] "RemoveContainer" containerID="91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.446339 4725 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109"} err="failed to get container status \"91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109\": rpc error: code = NotFound desc = could not find container \"91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109\": container with ID starting with 91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109 not found: ID does not exist" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.446355 4725 scope.go:117] "RemoveContainer" containerID="50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.446603 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916"} err="failed to get container status \"50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916\": rpc error: code = NotFound desc = could not find container \"50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916\": container with ID starting with 50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916 not found: ID does not exist" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.446647 4725 scope.go:117] "RemoveContainer" containerID="76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.446927 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11"} err="failed to get container status \"76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11\": rpc error: code = NotFound desc = could not find container \"76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11\": container with ID starting with 76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11 not found: ID does not exist" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.446950 4725 scope.go:117] "RemoveContainer" containerID="fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.447189 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d"} err="failed to get container status \"fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d\": rpc error: code = NotFound desc = could not find container \"fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d\": container with ID starting with fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d not found: ID does not exist" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.447207 4725 scope.go:117] "RemoveContainer" containerID="bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.447397 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e"} err="failed to get container status \"bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e\": rpc error: code = NotFound desc = could not find container \"bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e\": container with ID starting with bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e not found: ID does not exist" Oct 
14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.447414 4725 scope.go:117] "RemoveContainer" containerID="06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.447646 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08"} err="failed to get container status \"06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08\": rpc error: code = NotFound desc = could not find container \"06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08\": container with ID starting with 06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08 not found: ID does not exist" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.447666 4725 scope.go:117] "RemoveContainer" containerID="e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.447960 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b"} err="failed to get container status \"e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\": rpc error: code = NotFound desc = could not find container \"e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\": container with ID starting with e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b not found: ID does not exist" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.447983 4725 scope.go:117] "RemoveContainer" containerID="c52eeb4bfd39980a5c20ddd0389da2011ca60292fa4c01cfea1d5c9d45d297b4" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.448231 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c52eeb4bfd39980a5c20ddd0389da2011ca60292fa4c01cfea1d5c9d45d297b4"} err="failed to get container status \"c52eeb4bfd39980a5c20ddd0389da2011ca60292fa4c01cfea1d5c9d45d297b4\": rpc error: code = NotFound desc = could not find container \"c52eeb4bfd39980a5c20ddd0389da2011ca60292fa4c01cfea1d5c9d45d297b4\": container with ID starting with c52eeb4bfd39980a5c20ddd0389da2011ca60292fa4c01cfea1d5c9d45d297b4 not found: ID does not exist" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.448249 4725 scope.go:117] "RemoveContainer" containerID="c17bed2977625610ad16227eb2cf9d89f9964a0b6a3305056c46d4230602f4f9" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.448436 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c17bed2977625610ad16227eb2cf9d89f9964a0b6a3305056c46d4230602f4f9"} err="failed to get container status \"c17bed2977625610ad16227eb2cf9d89f9964a0b6a3305056c46d4230602f4f9\": rpc error: code = NotFound desc = could not find container \"c17bed2977625610ad16227eb2cf9d89f9964a0b6a3305056c46d4230602f4f9\": container with ID starting with c17bed2977625610ad16227eb2cf9d89f9964a0b6a3305056c46d4230602f4f9 not found: ID does not exist" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.448470 4725 scope.go:117] "RemoveContainer" containerID="7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.448669 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b"} err="failed to get container status 
\"7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b\": rpc error: code = NotFound desc = could not find container \"7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b\": container with ID starting with 7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b not found: ID does not exist" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.448690 4725 scope.go:117] "RemoveContainer" containerID="91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.448943 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109"} err="failed to get container status \"91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109\": rpc error: code = NotFound desc = could not find container \"91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109\": container with ID starting with 91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109 not found: ID does not exist" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.448963 4725 scope.go:117] "RemoveContainer" containerID="50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.449204 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916"} err="failed to get container status \"50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916\": rpc error: code = NotFound desc = could not find container \"50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916\": container with ID starting with 50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916 not found: ID does not exist" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.449230 4725 scope.go:117] "RemoveContainer" containerID="76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.449408 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11"} err="failed to get container status \"76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11\": rpc error: code = NotFound desc = could not find container \"76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11\": container with ID starting with 76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11 not found: ID does not exist" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.449424 4725 scope.go:117] "RemoveContainer" containerID="fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.449626 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d"} err="failed to get container status \"fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d\": rpc error: code = NotFound desc = could not find container \"fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d\": container with ID starting with fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d not found: ID does not exist" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.449661 4725 scope.go:117] "RemoveContainer" 
containerID="bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.449857 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e"} err="failed to get container status \"bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e\": rpc error: code = NotFound desc = could not find container \"bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e\": container with ID starting with bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e not found: ID does not exist" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.449881 4725 scope.go:117] "RemoveContainer" containerID="06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.450099 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08"} err="failed to get container status \"06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08\": rpc error: code = NotFound desc = could not find container \"06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08\": container with ID starting with 06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08 not found: ID does not exist" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.450116 4725 scope.go:117] "RemoveContainer" containerID="e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.450358 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b"} err="failed to get container status \"e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\": rpc error: code = NotFound desc = could not find container \"e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\": container with ID starting with e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b not found: ID does not exist" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.450373 4725 scope.go:117] "RemoveContainer" containerID="c52eeb4bfd39980a5c20ddd0389da2011ca60292fa4c01cfea1d5c9d45d297b4" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.450549 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c52eeb4bfd39980a5c20ddd0389da2011ca60292fa4c01cfea1d5c9d45d297b4"} err="failed to get container status \"c52eeb4bfd39980a5c20ddd0389da2011ca60292fa4c01cfea1d5c9d45d297b4\": rpc error: code = NotFound desc = could not find container \"c52eeb4bfd39980a5c20ddd0389da2011ca60292fa4c01cfea1d5c9d45d297b4\": container with ID starting with c52eeb4bfd39980a5c20ddd0389da2011ca60292fa4c01cfea1d5c9d45d297b4 not found: ID does not exist" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.450565 4725 scope.go:117] "RemoveContainer" containerID="c17bed2977625610ad16227eb2cf9d89f9964a0b6a3305056c46d4230602f4f9" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.450740 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c17bed2977625610ad16227eb2cf9d89f9964a0b6a3305056c46d4230602f4f9"} err="failed to get container status \"c17bed2977625610ad16227eb2cf9d89f9964a0b6a3305056c46d4230602f4f9\": rpc error: code = NotFound desc = could not find 
container \"c17bed2977625610ad16227eb2cf9d89f9964a0b6a3305056c46d4230602f4f9\": container with ID starting with c17bed2977625610ad16227eb2cf9d89f9964a0b6a3305056c46d4230602f4f9 not found: ID does not exist" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.450761 4725 scope.go:117] "RemoveContainer" containerID="7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.450985 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b"} err="failed to get container status \"7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b\": rpc error: code = NotFound desc = could not find container \"7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b\": container with ID starting with 7751e512d0cafe8ecfb7a11a4480e5e8e5b09b339637a98971f0be80cb9a566b not found: ID does not exist" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.451002 4725 scope.go:117] "RemoveContainer" containerID="91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.451227 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109"} err="failed to get container status \"91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109\": rpc error: code = NotFound desc = could not find container \"91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109\": container with ID starting with 91fa583230f32ec37cd9e36e819491a13bd831948d7fd6d0decce9d902e64109 not found: ID does not exist" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.451246 4725 scope.go:117] "RemoveContainer" containerID="50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.451429 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916"} err="failed to get container status \"50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916\": rpc error: code = NotFound desc = could not find container \"50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916\": container with ID starting with 50413ffd4089618652203ea331e27f636a808a15786409f0848f72c329e78916 not found: ID does not exist" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.451458 4725 scope.go:117] "RemoveContainer" containerID="76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.451681 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11"} err="failed to get container status \"76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11\": rpc error: code = NotFound desc = could not find container \"76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11\": container with ID starting with 76ea78a2f43632ad924dfa530b4b1a15b3742392085d5882535cfbc687d9bc11 not found: ID does not exist" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.451700 4725 scope.go:117] "RemoveContainer" containerID="fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.451929 4725 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d"} err="failed to get container status \"fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d\": rpc error: code = NotFound desc = could not find container \"fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d\": container with ID starting with fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d not found: ID does not exist" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.451948 4725 scope.go:117] "RemoveContainer" containerID="bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.452311 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e"} err="failed to get container status \"bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e\": rpc error: code = NotFound desc = could not find container \"bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e\": container with ID starting with bbee4a6ecb6cf68b62f0aa2f98d61438fc354bfe7b306e4b6a597b303a286f2e not found: ID does not exist" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.452339 4725 scope.go:117] "RemoveContainer" containerID="06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.452561 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08"} err="failed to get container status \"06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08\": rpc error: code = NotFound desc = could not find container \"06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08\": container with ID starting with 06f18f76ba2d2cc3b739242a6c2832715e46079515bf7e14b184593ec7e41b08 not found: ID does not exist" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.452578 4725 scope.go:117] "RemoveContainer" containerID="e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b" Oct 14 13:26:10 crc kubenswrapper[4725]: I1014 13:26:10.452839 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b"} err="failed to get container status \"e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\": rpc error: code = NotFound desc = could not find container \"e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b\": container with ID starting with e2439dd0748226999d0d377ae77d736ce7d4e5c591a710ee37e3c300ba646c0b not found: ID does not exist" Oct 14 13:26:11 crc kubenswrapper[4725]: I1014 13:26:11.215022 4725 generic.go:334] "Generic (PLEG): container finished" podID="38856e1b-e019-49cd-ae2c-a174e2a7faa8" containerID="8c4fe153cd76b44088c22377d82ba38c06d46ac3a558f8a1f9139869ff558e35" exitCode=0 Oct 14 13:26:11 crc kubenswrapper[4725]: I1014 13:26:11.215078 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" event={"ID":"38856e1b-e019-49cd-ae2c-a174e2a7faa8","Type":"ContainerDied","Data":"8c4fe153cd76b44088c22377d82ba38c06d46ac3a558f8a1f9139869ff558e35"} Oct 14 13:26:11 crc kubenswrapper[4725]: I1014 13:26:11.217127 4725 log.go:25] "Finished parsing log file" 
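
The RemoveContainer / "DeleteContainer returned error" cycle above is the kubelet garbage-collecting containers whose records CRI-O has already dropped: ContainerStatus comes back NotFound, so each delete amounts to a no-op that is safe to repeat. A minimal Go sketch of that idempotent-cleanup pattern; the Runtime interface, emptyRuntime stub, and removeIfPresent helper are invented for illustration and are not the kubelet's actual code.

package main

import (
	"errors"
	"fmt"
)

// ErrNotFound stands in for the CRI "code = NotFound" error seen in the log.
var ErrNotFound = errors.New("container not found: ID does not exist")

// Runtime is a hypothetical, minimal slice of a container-runtime API.
type Runtime interface {
	ContainerStatus(id string) (string, error)
	RemoveContainer(id string) error
}

// emptyRuntime is a stub runtime that has already forgotten every container.
type emptyRuntime struct{}

func (emptyRuntime) ContainerStatus(id string) (string, error) { return "", ErrNotFound }
func (emptyRuntime) RemoveContainer(id string) error           { return nil }

// removeIfPresent treats NotFound as success: if the runtime no longer has
// the container, there is nothing left to delete and the retry is harmless.
func removeIfPresent(rt Runtime, id string) error {
	if _, err := rt.ContainerStatus(id); err != nil {
		if errors.Is(err, ErrNotFound) {
			return nil // already gone; deletion is idempotent
		}
		return fmt.Errorf("failed to get container status %q: %w", id, err)
	}
	return rt.RemoveContainer(id)
}

func main() {
	err := removeIfPresent(emptyRuntime{}, "fc1ff66eeba391ca404149aaecb31c36c0a0f0586baa10d6d2548af6dbca0a6d")
	fmt.Println("cleanup error:", err) // prints "cleanup error: <nil>"
}

The design point is that cleanup keyed on a cached container ID must tolerate NotFound, since the runtime may prune its own state first; that is why these entries are logged at info level rather than treated as failures.
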
path="/var/log/pods/openshift-multus_multus-kbgwl_d4ed727c-f4d1-47cd-a218-e22803eb1750/kube-multus/2.log" Oct 14 13:26:11 crc kubenswrapper[4725]: I1014 13:26:11.929074 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38d54d71-93d1-4cde-940e-a371117f59bd" path="/var/lib/kubelet/pods/38d54d71-93d1-4cde-940e-a371117f59bd/volumes" Oct 14 13:26:12 crc kubenswrapper[4725]: I1014 13:26:12.232333 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" event={"ID":"38856e1b-e019-49cd-ae2c-a174e2a7faa8","Type":"ContainerStarted","Data":"a0416b5bfdfec8051528ddf4ba8d836092d609fd046c428c2c1ece8a0f592e3c"} Oct 14 13:26:12 crc kubenswrapper[4725]: I1014 13:26:12.232817 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" event={"ID":"38856e1b-e019-49cd-ae2c-a174e2a7faa8","Type":"ContainerStarted","Data":"a87bc08c2cabe0bcf7ca60c69a237ef3624955cfc4b246524246b973a63746cc"} Oct 14 13:26:12 crc kubenswrapper[4725]: I1014 13:26:12.232834 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" event={"ID":"38856e1b-e019-49cd-ae2c-a174e2a7faa8","Type":"ContainerStarted","Data":"621bb323c4a58cc1059874257ab986c4dc73d6b5e5f89518185ff093021b9972"} Oct 14 13:26:12 crc kubenswrapper[4725]: I1014 13:26:12.232847 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" event={"ID":"38856e1b-e019-49cd-ae2c-a174e2a7faa8","Type":"ContainerStarted","Data":"cbfbae58219c34a7f046f279c3ce7f4f824afb6ebcc1c46c03049258cbc7d27c"} Oct 14 13:26:12 crc kubenswrapper[4725]: I1014 13:26:12.232861 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" event={"ID":"38856e1b-e019-49cd-ae2c-a174e2a7faa8","Type":"ContainerStarted","Data":"2f6ab775473dc50ac5947a919bbd47ca1aa71374a65a94a088ece224ddbb5c35"} Oct 14 13:26:12 crc kubenswrapper[4725]: I1014 13:26:12.232871 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" event={"ID":"38856e1b-e019-49cd-ae2c-a174e2a7faa8","Type":"ContainerStarted","Data":"c09633daee0949f9a85bf3d6efbd713207c65817b1c48ae9de6ccacd12775494"} Oct 14 13:26:14 crc kubenswrapper[4725]: I1014 13:26:14.249496 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" event={"ID":"38856e1b-e019-49cd-ae2c-a174e2a7faa8","Type":"ContainerStarted","Data":"02bb012ff1119a52939812b693705feca7eca013c9cb56593cb9ac86af57c890"} Oct 14 13:26:17 crc kubenswrapper[4725]: I1014 13:26:17.272469 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" event={"ID":"38856e1b-e019-49cd-ae2c-a174e2a7faa8","Type":"ContainerStarted","Data":"964097a752f27b897cb368508a7b854660cceec25e8c7b93026b34b4a6fe9737"} Oct 14 13:26:17 crc kubenswrapper[4725]: I1014 13:26:17.273008 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:17 crc kubenswrapper[4725]: I1014 13:26:17.273124 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:17 crc kubenswrapper[4725]: I1014 13:26:17.273187 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:17 crc kubenswrapper[4725]: I1014 13:26:17.298019 4725 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:17 crc kubenswrapper[4725]: I1014 13:26:17.302158 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" podStartSLOduration=8.302140787 podStartE2EDuration="8.302140787s" podCreationTimestamp="2025-10-14 13:26:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:26:17.297307404 +0000 UTC m=+694.145742213" watchObservedRunningTime="2025-10-14 13:26:17.302140787 +0000 UTC m=+694.150575596" Oct 14 13:26:17 crc kubenswrapper[4725]: I1014 13:26:17.309021 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:23 crc kubenswrapper[4725]: I1014 13:26:23.925075 4725 scope.go:117] "RemoveContainer" containerID="a99474c5c2939852e49d51916f4f54fb8a55b54572012502692bfefcee420f3e" Oct 14 13:26:23 crc kubenswrapper[4725]: E1014 13:26:23.926288 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-kbgwl_openshift-multus(d4ed727c-f4d1-47cd-a218-e22803eb1750)\"" pod="openshift-multus/multus-kbgwl" podUID="d4ed727c-f4d1-47cd-a218-e22803eb1750" Oct 14 13:26:31 crc kubenswrapper[4725]: I1014 13:26:31.647910 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb"] Oct 14 13:26:31 crc kubenswrapper[4725]: I1014 13:26:31.650799 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb" Oct 14 13:26:31 crc kubenswrapper[4725]: I1014 13:26:31.653414 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Oct 14 13:26:31 crc kubenswrapper[4725]: I1014 13:26:31.663709 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb"] Oct 14 13:26:31 crc kubenswrapper[4725]: I1014 13:26:31.774670 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d0546f96-2a09-4b8f-9f0d-33615b2a71b8-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb\" (UID: \"d0546f96-2a09-4b8f-9f0d-33615b2a71b8\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb" Oct 14 13:26:31 crc kubenswrapper[4725]: I1014 13:26:31.774770 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d0546f96-2a09-4b8f-9f0d-33615b2a71b8-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb\" (UID: \"d0546f96-2a09-4b8f-9f0d-33615b2a71b8\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb" Oct 14 13:26:31 crc kubenswrapper[4725]: I1014 13:26:31.775109 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ph2z\" (UniqueName: \"kubernetes.io/projected/d0546f96-2a09-4b8f-9f0d-33615b2a71b8-kube-api-access-6ph2z\") pod 
\"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb\" (UID: \"d0546f96-2a09-4b8f-9f0d-33615b2a71b8\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb" Oct 14 13:26:31 crc kubenswrapper[4725]: I1014 13:26:31.876394 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d0546f96-2a09-4b8f-9f0d-33615b2a71b8-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb\" (UID: \"d0546f96-2a09-4b8f-9f0d-33615b2a71b8\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb" Oct 14 13:26:31 crc kubenswrapper[4725]: I1014 13:26:31.876551 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d0546f96-2a09-4b8f-9f0d-33615b2a71b8-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb\" (UID: \"d0546f96-2a09-4b8f-9f0d-33615b2a71b8\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb" Oct 14 13:26:31 crc kubenswrapper[4725]: I1014 13:26:31.876635 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ph2z\" (UniqueName: \"kubernetes.io/projected/d0546f96-2a09-4b8f-9f0d-33615b2a71b8-kube-api-access-6ph2z\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb\" (UID: \"d0546f96-2a09-4b8f-9f0d-33615b2a71b8\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb" Oct 14 13:26:31 crc kubenswrapper[4725]: I1014 13:26:31.876969 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d0546f96-2a09-4b8f-9f0d-33615b2a71b8-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb\" (UID: \"d0546f96-2a09-4b8f-9f0d-33615b2a71b8\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb" Oct 14 13:26:31 crc kubenswrapper[4725]: I1014 13:26:31.877050 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d0546f96-2a09-4b8f-9f0d-33615b2a71b8-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb\" (UID: \"d0546f96-2a09-4b8f-9f0d-33615b2a71b8\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb" Oct 14 13:26:31 crc kubenswrapper[4725]: I1014 13:26:31.894652 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ph2z\" (UniqueName: \"kubernetes.io/projected/d0546f96-2a09-4b8f-9f0d-33615b2a71b8-kube-api-access-6ph2z\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb\" (UID: \"d0546f96-2a09-4b8f-9f0d-33615b2a71b8\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb" Oct 14 13:26:31 crc kubenswrapper[4725]: I1014 13:26:31.981825 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb" Oct 14 13:26:32 crc kubenswrapper[4725]: E1014 13:26:32.006735 4725 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb_openshift-marketplace_d0546f96-2a09-4b8f-9f0d-33615b2a71b8_0(71ca8dfc3b371704743fbbefb022f20e32293745a03042ce5afa2418a8f25e40): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 14 13:26:32 crc kubenswrapper[4725]: E1014 13:26:32.006802 4725 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb_openshift-marketplace_d0546f96-2a09-4b8f-9f0d-33615b2a71b8_0(71ca8dfc3b371704743fbbefb022f20e32293745a03042ce5afa2418a8f25e40): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb" Oct 14 13:26:32 crc kubenswrapper[4725]: E1014 13:26:32.006825 4725 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb_openshift-marketplace_d0546f96-2a09-4b8f-9f0d-33615b2a71b8_0(71ca8dfc3b371704743fbbefb022f20e32293745a03042ce5afa2418a8f25e40): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb" Oct 14 13:26:32 crc kubenswrapper[4725]: E1014 13:26:32.006869 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb_openshift-marketplace(d0546f96-2a09-4b8f-9f0d-33615b2a71b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb_openshift-marketplace(d0546f96-2a09-4b8f-9f0d-33615b2a71b8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb_openshift-marketplace_d0546f96-2a09-4b8f-9f0d-33615b2a71b8_0(71ca8dfc3b371704743fbbefb022f20e32293745a03042ce5afa2418a8f25e40): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb" podUID="d0546f96-2a09-4b8f-9f0d-33615b2a71b8" Oct 14 13:26:32 crc kubenswrapper[4725]: I1014 13:26:32.354762 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb" Oct 14 13:26:32 crc kubenswrapper[4725]: I1014 13:26:32.355626 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb" Oct 14 13:26:32 crc kubenswrapper[4725]: E1014 13:26:32.389443 4725 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb_openshift-marketplace_d0546f96-2a09-4b8f-9f0d-33615b2a71b8_0(576ceea6ccdc70c206610dfc9006ee398fbaa7715bd7245bc3bc33be17790383): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 14 13:26:32 crc kubenswrapper[4725]: E1014 13:26:32.389785 4725 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb_openshift-marketplace_d0546f96-2a09-4b8f-9f0d-33615b2a71b8_0(576ceea6ccdc70c206610dfc9006ee398fbaa7715bd7245bc3bc33be17790383): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb" Oct 14 13:26:32 crc kubenswrapper[4725]: E1014 13:26:32.389833 4725 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb_openshift-marketplace_d0546f96-2a09-4b8f-9f0d-33615b2a71b8_0(576ceea6ccdc70c206610dfc9006ee398fbaa7715bd7245bc3bc33be17790383): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb" Oct 14 13:26:32 crc kubenswrapper[4725]: E1014 13:26:32.389897 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb_openshift-marketplace(d0546f96-2a09-4b8f-9f0d-33615b2a71b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb_openshift-marketplace(d0546f96-2a09-4b8f-9f0d-33615b2a71b8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb_openshift-marketplace_d0546f96-2a09-4b8f-9f0d-33615b2a71b8_0(576ceea6ccdc70c206610dfc9006ee398fbaa7715bd7245bc3bc33be17790383): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Oct 14 13:26:37 crc kubenswrapper[4725]: I1014 13:26:37.921193 4725 scope.go:117] "RemoveContainer" containerID="a99474c5c2939852e49d51916f4f54fb8a55b54572012502692bfefcee420f3e" Oct 14 13:26:38 crc kubenswrapper[4725]: I1014 13:26:38.396632 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kbgwl_d4ed727c-f4d1-47cd-a218-e22803eb1750/kube-multus/2.log" Oct 14 13:26:38 crc kubenswrapper[4725]: I1014 13:26:38.396726 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kbgwl" event={"ID":"d4ed727c-f4d1-47cd-a218-e22803eb1750","Type":"ContainerStarted","Data":"1b98e7d1d5e7e4aca65a25d5a2fda340fe0c26fed2de3b8459cfb16616930673"} Oct 14 13:26:40 crc kubenswrapper[4725]: I1014 13:26:40.150798 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6rlq6" Oct 14 13:26:43 crc kubenswrapper[4725]: I1014 13:26:43.920196 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb" Oct 14 13:26:43 crc kubenswrapper[4725]: I1014 13:26:43.923845 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb" Oct 14 13:26:44 crc kubenswrapper[4725]: I1014 13:26:44.119476 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb"] Oct 14 13:26:44 crc kubenswrapper[4725]: I1014 13:26:44.438028 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb" event={"ID":"d0546f96-2a09-4b8f-9f0d-33615b2a71b8","Type":"ContainerStarted","Data":"ec3fd9ddc50c96e7d0131cc8cc3ecd373828a9121b5276dba3950128df2c23d3"} Oct 14 13:26:45 crc kubenswrapper[4725]: I1014 13:26:45.447616 4725 generic.go:334] "Generic (PLEG): container finished" podID="d0546f96-2a09-4b8f-9f0d-33615b2a71b8" containerID="79fe1e129e3c8b71f07855612ea6458c64c7c2ebbfb9d05e246256a637d30618" exitCode=0 Oct 14 13:26:45 crc kubenswrapper[4725]: I1014 13:26:45.447800 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb" event={"ID":"d0546f96-2a09-4b8f-9f0d-33615b2a71b8","Type":"ContainerDied","Data":"79fe1e129e3c8b71f07855612ea6458c64c7c2ebbfb9d05e246256a637d30618"} Oct 14 13:26:48 crc kubenswrapper[4725]: I1014 13:26:48.468316 4725 generic.go:334] "Generic (PLEG): container finished" podID="d0546f96-2a09-4b8f-9f0d-33615b2a71b8" containerID="fbfdea444bbe24d26ef2a6b7dbe537202b052f74d9ff528b91c9929bfb63bced" exitCode=0 Oct 14 13:26:48 crc kubenswrapper[4725]: I1014 13:26:48.468399 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb" event={"ID":"d0546f96-2a09-4b8f-9f0d-33615b2a71b8","Type":"ContainerDied","Data":"fbfdea444bbe24d26ef2a6b7dbe537202b052f74d9ff528b91c9929bfb63bced"} Oct 14 13:26:49 crc kubenswrapper[4725]: I1014 13:26:49.477593 4725 generic.go:334] "Generic (PLEG): container finished" podID="d0546f96-2a09-4b8f-9f0d-33615b2a71b8" containerID="19ebd16fab2910e624358ad1df4df7a4df54244e69edcd6c0183654a6853daf2" exitCode=0
containerID="19ebd16fab2910e624358ad1df4df7a4df54244e69edcd6c0183654a6853daf2" exitCode=0 Oct 14 13:26:49 crc kubenswrapper[4725]: I1014 13:26:49.477671 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb" event={"ID":"d0546f96-2a09-4b8f-9f0d-33615b2a71b8","Type":"ContainerDied","Data":"19ebd16fab2910e624358ad1df4df7a4df54244e69edcd6c0183654a6853daf2"} Oct 14 13:26:50 crc kubenswrapper[4725]: I1014 13:26:50.728146 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb" Oct 14 13:26:50 crc kubenswrapper[4725]: I1014 13:26:50.855789 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d0546f96-2a09-4b8f-9f0d-33615b2a71b8-util\") pod \"d0546f96-2a09-4b8f-9f0d-33615b2a71b8\" (UID: \"d0546f96-2a09-4b8f-9f0d-33615b2a71b8\") " Oct 14 13:26:50 crc kubenswrapper[4725]: I1014 13:26:50.855915 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d0546f96-2a09-4b8f-9f0d-33615b2a71b8-bundle\") pod \"d0546f96-2a09-4b8f-9f0d-33615b2a71b8\" (UID: \"d0546f96-2a09-4b8f-9f0d-33615b2a71b8\") " Oct 14 13:26:50 crc kubenswrapper[4725]: I1014 13:26:50.855968 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ph2z\" (UniqueName: \"kubernetes.io/projected/d0546f96-2a09-4b8f-9f0d-33615b2a71b8-kube-api-access-6ph2z\") pod \"d0546f96-2a09-4b8f-9f0d-33615b2a71b8\" (UID: \"d0546f96-2a09-4b8f-9f0d-33615b2a71b8\") " Oct 14 13:26:50 crc kubenswrapper[4725]: I1014 13:26:50.856444 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0546f96-2a09-4b8f-9f0d-33615b2a71b8-bundle" (OuterVolumeSpecName: "bundle") pod "d0546f96-2a09-4b8f-9f0d-33615b2a71b8" (UID: "d0546f96-2a09-4b8f-9f0d-33615b2a71b8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:26:50 crc kubenswrapper[4725]: I1014 13:26:50.860817 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0546f96-2a09-4b8f-9f0d-33615b2a71b8-kube-api-access-6ph2z" (OuterVolumeSpecName: "kube-api-access-6ph2z") pod "d0546f96-2a09-4b8f-9f0d-33615b2a71b8" (UID: "d0546f96-2a09-4b8f-9f0d-33615b2a71b8"). InnerVolumeSpecName "kube-api-access-6ph2z". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:26:50 crc kubenswrapper[4725]: I1014 13:26:50.867578 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0546f96-2a09-4b8f-9f0d-33615b2a71b8-util" (OuterVolumeSpecName: "util") pod "d0546f96-2a09-4b8f-9f0d-33615b2a71b8" (UID: "d0546f96-2a09-4b8f-9f0d-33615b2a71b8"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:26:50 crc kubenswrapper[4725]: I1014 13:26:50.957614 4725 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d0546f96-2a09-4b8f-9f0d-33615b2a71b8-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:26:50 crc kubenswrapper[4725]: I1014 13:26:50.957757 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ph2z\" (UniqueName: \"kubernetes.io/projected/d0546f96-2a09-4b8f-9f0d-33615b2a71b8-kube-api-access-6ph2z\") on node \"crc\" DevicePath \"\"" Oct 14 13:26:50 crc kubenswrapper[4725]: I1014 13:26:50.957775 4725 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d0546f96-2a09-4b8f-9f0d-33615b2a71b8-util\") on node \"crc\" DevicePath \"\"" Oct 14 13:26:51 crc kubenswrapper[4725]: I1014 13:26:51.493755 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb" event={"ID":"d0546f96-2a09-4b8f-9f0d-33615b2a71b8","Type":"ContainerDied","Data":"ec3fd9ddc50c96e7d0131cc8cc3ecd373828a9121b5276dba3950128df2c23d3"} Oct 14 13:26:51 crc kubenswrapper[4725]: I1014 13:26:51.493816 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec3fd9ddc50c96e7d0131cc8cc3ecd373828a9121b5276dba3950128df2c23d3" Oct 14 13:26:51 crc kubenswrapper[4725]: I1014 13:26:51.493912 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb" Oct 14 13:26:53 crc kubenswrapper[4725]: I1014 13:26:53.314030 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-kgnhm"] Oct 14 13:26:53 crc kubenswrapper[4725]: E1014 13:26:53.314592 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0546f96-2a09-4b8f-9f0d-33615b2a71b8" containerName="pull" Oct 14 13:26:53 crc kubenswrapper[4725]: I1014 13:26:53.314609 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0546f96-2a09-4b8f-9f0d-33615b2a71b8" containerName="pull" Oct 14 13:26:53 crc kubenswrapper[4725]: E1014 13:26:53.314625 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0546f96-2a09-4b8f-9f0d-33615b2a71b8" containerName="extract" Oct 14 13:26:53 crc kubenswrapper[4725]: I1014 13:26:53.314633 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0546f96-2a09-4b8f-9f0d-33615b2a71b8" containerName="extract" Oct 14 13:26:53 crc kubenswrapper[4725]: E1014 13:26:53.314660 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0546f96-2a09-4b8f-9f0d-33615b2a71b8" containerName="util" Oct 14 13:26:53 crc kubenswrapper[4725]: I1014 13:26:53.314685 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0546f96-2a09-4b8f-9f0d-33615b2a71b8" containerName="util" Oct 14 13:26:53 crc kubenswrapper[4725]: I1014 13:26:53.314803 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0546f96-2a09-4b8f-9f0d-33615b2a71b8" containerName="extract" Oct 14 13:26:53 crc kubenswrapper[4725]: I1014 13:26:53.315210 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-858ddd8f98-kgnhm" Oct 14 13:26:53 crc kubenswrapper[4725]: I1014 13:26:53.317532 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Oct 14 13:26:53 crc kubenswrapper[4725]: I1014 13:26:53.317794 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-j9pc7" Oct 14 13:26:53 crc kubenswrapper[4725]: I1014 13:26:53.318536 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Oct 14 13:26:53 crc kubenswrapper[4725]: I1014 13:26:53.324700 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-kgnhm"] Oct 14 13:26:53 crc kubenswrapper[4725]: I1014 13:26:53.485697 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rttf\" (UniqueName: \"kubernetes.io/projected/f4b6e00a-85f8-4036-abc1-c53043f84612-kube-api-access-4rttf\") pod \"nmstate-operator-858ddd8f98-kgnhm\" (UID: \"f4b6e00a-85f8-4036-abc1-c53043f84612\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-kgnhm" Oct 14 13:26:53 crc kubenswrapper[4725]: I1014 13:26:53.587193 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rttf\" (UniqueName: \"kubernetes.io/projected/f4b6e00a-85f8-4036-abc1-c53043f84612-kube-api-access-4rttf\") pod \"nmstate-operator-858ddd8f98-kgnhm\" (UID: \"f4b6e00a-85f8-4036-abc1-c53043f84612\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-kgnhm" Oct 14 13:26:53 crc kubenswrapper[4725]: I1014 13:26:53.604261 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rttf\" (UniqueName: \"kubernetes.io/projected/f4b6e00a-85f8-4036-abc1-c53043f84612-kube-api-access-4rttf\") pod \"nmstate-operator-858ddd8f98-kgnhm\" (UID: \"f4b6e00a-85f8-4036-abc1-c53043f84612\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-kgnhm" Oct 14 13:26:53 crc kubenswrapper[4725]: I1014 13:26:53.631711 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-858ddd8f98-kgnhm" Oct 14 13:26:53 crc kubenswrapper[4725]: I1014 13:26:53.849430 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-kgnhm"] Oct 14 13:26:54 crc kubenswrapper[4725]: I1014 13:26:54.509765 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-858ddd8f98-kgnhm" event={"ID":"f4b6e00a-85f8-4036-abc1-c53043f84612","Type":"ContainerStarted","Data":"e24e1ee8ac51fd3190e6b4552643c35188bfb86ce5546aa6590067e27d873ede"} Oct 14 13:27:01 crc kubenswrapper[4725]: I1014 13:27:01.545904 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-858ddd8f98-kgnhm" event={"ID":"f4b6e00a-85f8-4036-abc1-c53043f84612","Type":"ContainerStarted","Data":"d1856d6921aa98f85a833d2226ebb0ff651fdab2232199ac36c83fccc1237d64"} Oct 14 13:27:01 crc kubenswrapper[4725]: I1014 13:27:01.569015 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-858ddd8f98-kgnhm" podStartSLOduration=1.523656823 podStartE2EDuration="8.568987472s" podCreationTimestamp="2025-10-14 13:26:53 +0000 UTC" firstStartedPulling="2025-10-14 13:26:53.85867444 +0000 UTC m=+730.707109249" lastFinishedPulling="2025-10-14 13:27:00.904005089 +0000 UTC m=+737.752439898" observedRunningTime="2025-10-14 13:27:01.564778906 +0000 UTC m=+738.413213715" watchObservedRunningTime="2025-10-14 13:27:01.568987472 +0000 UTC m=+738.417422281" Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.520499 4725 patch_prober.go:28] interesting pod/machine-config-daemon-t9hh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.520559 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.541110 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-zxznn"] Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.542252 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-zxznn" Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.548224 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-mb6xr" Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.550915 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-bvg5g"] Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.551905 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-bvg5g" Oct 14 13:27:02 crc kubenswrapper[4725]: W1014 13:27:02.556808 4725 reflector.go:561] object-"openshift-nmstate"/"openshift-nmstate-webhook": failed to list *v1.Secret: secrets "openshift-nmstate-webhook" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-nmstate": no relationship found between node 'crc' and this object Oct 14 13:27:02 crc kubenswrapper[4725]: E1014 13:27:02.556858 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"openshift-nmstate-webhook\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-nmstate-webhook\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-nmstate\": no relationship found between node 'crc' and this object" logger="UnhandledError" Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.563551 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-bvg5g"] Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.576885 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-5j272"] Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.577709 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-5j272" Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.598396 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-zxznn"] Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.673987 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-nwf94"] Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.674690 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-nwf94" Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.693253 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.693682 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.693715 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-hgtvg" Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.698826 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-nwf94"] Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.754964 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3f7ba899-2a43-4866-b3d7-34b6ca02b7e4-nmstate-lock\") pod \"nmstate-handler-5j272\" (UID: \"3f7ba899-2a43-4866-b3d7-34b6ca02b7e4\") " pod="openshift-nmstate/nmstate-handler-5j272" Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.755321 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppvgw\" (UniqueName: \"kubernetes.io/projected/3f7ba899-2a43-4866-b3d7-34b6ca02b7e4-kube-api-access-ppvgw\") pod \"nmstate-handler-5j272\" (UID: \"3f7ba899-2a43-4866-b3d7-34b6ca02b7e4\") " pod="openshift-nmstate/nmstate-handler-5j272" Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.755520 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9a636636-e68e-4e0f-ac55-b64f6e886b0e-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-bvg5g\" (UID: \"9a636636-e68e-4e0f-ac55-b64f6e886b0e\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-bvg5g" Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.755672 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6vch\" (UniqueName: \"kubernetes.io/projected/42b236b2-dcbb-4c0c-8916-1eba5e90f301-kube-api-access-f6vch\") pod \"nmstate-metrics-fdff9cb8d-zxznn\" (UID: \"42b236b2-dcbb-4c0c-8916-1eba5e90f301\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-zxznn" Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.755772 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bp5n\" (UniqueName: \"kubernetes.io/projected/9a636636-e68e-4e0f-ac55-b64f6e886b0e-kube-api-access-8bp5n\") pod \"nmstate-webhook-6cdbc54649-bvg5g\" (UID: \"9a636636-e68e-4e0f-ac55-b64f6e886b0e\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-bvg5g" Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.755867 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3f7ba899-2a43-4866-b3d7-34b6ca02b7e4-ovs-socket\") pod \"nmstate-handler-5j272\" (UID: \"3f7ba899-2a43-4866-b3d7-34b6ca02b7e4\") " pod="openshift-nmstate/nmstate-handler-5j272" Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.755967 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3f7ba899-2a43-4866-b3d7-34b6ca02b7e4-dbus-socket\") 
pod \"nmstate-handler-5j272\" (UID: \"3f7ba899-2a43-4866-b3d7-34b6ca02b7e4\") " pod="openshift-nmstate/nmstate-handler-5j272" Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.856428 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5ed299ff-7f75-4376-a446-2f24b1d1e539-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-nwf94\" (UID: \"5ed299ff-7f75-4376-a446-2f24b1d1e539\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-nwf94" Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.856545 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3f7ba899-2a43-4866-b3d7-34b6ca02b7e4-nmstate-lock\") pod \"nmstate-handler-5j272\" (UID: \"3f7ba899-2a43-4866-b3d7-34b6ca02b7e4\") " pod="openshift-nmstate/nmstate-handler-5j272" Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.856583 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ed299ff-7f75-4376-a446-2f24b1d1e539-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-nwf94\" (UID: \"5ed299ff-7f75-4376-a446-2f24b1d1e539\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-nwf94" Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.856620 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppvgw\" (UniqueName: \"kubernetes.io/projected/3f7ba899-2a43-4866-b3d7-34b6ca02b7e4-kube-api-access-ppvgw\") pod \"nmstate-handler-5j272\" (UID: \"3f7ba899-2a43-4866-b3d7-34b6ca02b7e4\") " pod="openshift-nmstate/nmstate-handler-5j272" Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.856657 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9a636636-e68e-4e0f-ac55-b64f6e886b0e-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-bvg5g\" (UID: \"9a636636-e68e-4e0f-ac55-b64f6e886b0e\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-bvg5g" Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.856714 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6vch\" (UniqueName: \"kubernetes.io/projected/42b236b2-dcbb-4c0c-8916-1eba5e90f301-kube-api-access-f6vch\") pod \"nmstate-metrics-fdff9cb8d-zxznn\" (UID: \"42b236b2-dcbb-4c0c-8916-1eba5e90f301\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-zxznn" Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.856754 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bp5n\" (UniqueName: \"kubernetes.io/projected/9a636636-e68e-4e0f-ac55-b64f6e886b0e-kube-api-access-8bp5n\") pod \"nmstate-webhook-6cdbc54649-bvg5g\" (UID: \"9a636636-e68e-4e0f-ac55-b64f6e886b0e\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-bvg5g" Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.856779 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3f7ba899-2a43-4866-b3d7-34b6ca02b7e4-ovs-socket\") pod \"nmstate-handler-5j272\" (UID: \"3f7ba899-2a43-4866-b3d7-34b6ca02b7e4\") " pod="openshift-nmstate/nmstate-handler-5j272" Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.856805 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-vz257\" (UniqueName: \"kubernetes.io/projected/5ed299ff-7f75-4376-a446-2f24b1d1e539-kube-api-access-vz257\") pod \"nmstate-console-plugin-6b874cbd85-nwf94\" (UID: \"5ed299ff-7f75-4376-a446-2f24b1d1e539\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-nwf94" Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.856836 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3f7ba899-2a43-4866-b3d7-34b6ca02b7e4-dbus-socket\") pod \"nmstate-handler-5j272\" (UID: \"3f7ba899-2a43-4866-b3d7-34b6ca02b7e4\") " pod="openshift-nmstate/nmstate-handler-5j272" Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.857097 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3f7ba899-2a43-4866-b3d7-34b6ca02b7e4-dbus-socket\") pod \"nmstate-handler-5j272\" (UID: \"3f7ba899-2a43-4866-b3d7-34b6ca02b7e4\") " pod="openshift-nmstate/nmstate-handler-5j272" Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.857154 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3f7ba899-2a43-4866-b3d7-34b6ca02b7e4-ovs-socket\") pod \"nmstate-handler-5j272\" (UID: \"3f7ba899-2a43-4866-b3d7-34b6ca02b7e4\") " pod="openshift-nmstate/nmstate-handler-5j272" Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.857161 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3f7ba899-2a43-4866-b3d7-34b6ca02b7e4-nmstate-lock\") pod \"nmstate-handler-5j272\" (UID: \"3f7ba899-2a43-4866-b3d7-34b6ca02b7e4\") " pod="openshift-nmstate/nmstate-handler-5j272" Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.876185 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppvgw\" (UniqueName: \"kubernetes.io/projected/3f7ba899-2a43-4866-b3d7-34b6ca02b7e4-kube-api-access-ppvgw\") pod \"nmstate-handler-5j272\" (UID: \"3f7ba899-2a43-4866-b3d7-34b6ca02b7e4\") " pod="openshift-nmstate/nmstate-handler-5j272" Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.877873 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6vch\" (UniqueName: \"kubernetes.io/projected/42b236b2-dcbb-4c0c-8916-1eba5e90f301-kube-api-access-f6vch\") pod \"nmstate-metrics-fdff9cb8d-zxznn\" (UID: \"42b236b2-dcbb-4c0c-8916-1eba5e90f301\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-zxznn" Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.878020 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bp5n\" (UniqueName: \"kubernetes.io/projected/9a636636-e68e-4e0f-ac55-b64f6e886b0e-kube-api-access-8bp5n\") pod \"nmstate-webhook-6cdbc54649-bvg5g\" (UID: \"9a636636-e68e-4e0f-ac55-b64f6e886b0e\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-bvg5g" Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.887902 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5658697489-hvjvm"] Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.888718 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5658697489-hvjvm" Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.889973 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-5j272" Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.891776 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5658697489-hvjvm"] Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.962147 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ed299ff-7f75-4376-a446-2f24b1d1e539-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-nwf94\" (UID: \"5ed299ff-7f75-4376-a446-2f24b1d1e539\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-nwf94" Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.962231 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz257\" (UniqueName: \"kubernetes.io/projected/5ed299ff-7f75-4376-a446-2f24b1d1e539-kube-api-access-vz257\") pod \"nmstate-console-plugin-6b874cbd85-nwf94\" (UID: \"5ed299ff-7f75-4376-a446-2f24b1d1e539\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-nwf94" Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.962256 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5ed299ff-7f75-4376-a446-2f24b1d1e539-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-nwf94\" (UID: \"5ed299ff-7f75-4376-a446-2f24b1d1e539\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-nwf94" Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.963388 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5ed299ff-7f75-4376-a446-2f24b1d1e539-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-nwf94\" (UID: \"5ed299ff-7f75-4376-a446-2f24b1d1e539\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-nwf94" Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.973980 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ed299ff-7f75-4376-a446-2f24b1d1e539-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-nwf94\" (UID: \"5ed299ff-7f75-4376-a446-2f24b1d1e539\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-nwf94" Oct 14 13:27:02 crc kubenswrapper[4725]: I1014 13:27:02.980441 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz257\" (UniqueName: \"kubernetes.io/projected/5ed299ff-7f75-4376-a446-2f24b1d1e539-kube-api-access-vz257\") pod \"nmstate-console-plugin-6b874cbd85-nwf94\" (UID: \"5ed299ff-7f75-4376-a446-2f24b1d1e539\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-nwf94" Oct 14 13:27:03 crc kubenswrapper[4725]: I1014 13:27:03.002502 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-nwf94" Oct 14 13:27:03 crc kubenswrapper[4725]: I1014 13:27:03.063654 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dde0d1e4-1d1c-4747-b499-d50fc70a6220-trusted-ca-bundle\") pod \"console-5658697489-hvjvm\" (UID: \"dde0d1e4-1d1c-4747-b499-d50fc70a6220\") " pod="openshift-console/console-5658697489-hvjvm" Oct 14 13:27:03 crc kubenswrapper[4725]: I1014 13:27:03.063756 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsksb\" (UniqueName: \"kubernetes.io/projected/dde0d1e4-1d1c-4747-b499-d50fc70a6220-kube-api-access-rsksb\") pod \"console-5658697489-hvjvm\" (UID: \"dde0d1e4-1d1c-4747-b499-d50fc70a6220\") " pod="openshift-console/console-5658697489-hvjvm" Oct 14 13:27:03 crc kubenswrapper[4725]: I1014 13:27:03.064205 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dde0d1e4-1d1c-4747-b499-d50fc70a6220-service-ca\") pod \"console-5658697489-hvjvm\" (UID: \"dde0d1e4-1d1c-4747-b499-d50fc70a6220\") " pod="openshift-console/console-5658697489-hvjvm" Oct 14 13:27:03 crc kubenswrapper[4725]: I1014 13:27:03.064240 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dde0d1e4-1d1c-4747-b499-d50fc70a6220-oauth-serving-cert\") pod \"console-5658697489-hvjvm\" (UID: \"dde0d1e4-1d1c-4747-b499-d50fc70a6220\") " pod="openshift-console/console-5658697489-hvjvm" Oct 14 13:27:03 crc kubenswrapper[4725]: I1014 13:27:03.064263 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dde0d1e4-1d1c-4747-b499-d50fc70a6220-console-config\") pod \"console-5658697489-hvjvm\" (UID: \"dde0d1e4-1d1c-4747-b499-d50fc70a6220\") " pod="openshift-console/console-5658697489-hvjvm" Oct 14 13:27:03 crc kubenswrapper[4725]: I1014 13:27:03.064291 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dde0d1e4-1d1c-4747-b499-d50fc70a6220-console-oauth-config\") pod \"console-5658697489-hvjvm\" (UID: \"dde0d1e4-1d1c-4747-b499-d50fc70a6220\") " pod="openshift-console/console-5658697489-hvjvm" Oct 14 13:27:03 crc kubenswrapper[4725]: I1014 13:27:03.064312 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dde0d1e4-1d1c-4747-b499-d50fc70a6220-console-serving-cert\") pod \"console-5658697489-hvjvm\" (UID: \"dde0d1e4-1d1c-4747-b499-d50fc70a6220\") " pod="openshift-console/console-5658697489-hvjvm" Oct 14 13:27:03 crc kubenswrapper[4725]: I1014 13:27:03.167831 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsksb\" (UniqueName: \"kubernetes.io/projected/dde0d1e4-1d1c-4747-b499-d50fc70a6220-kube-api-access-rsksb\") pod \"console-5658697489-hvjvm\" (UID: \"dde0d1e4-1d1c-4747-b499-d50fc70a6220\") " pod="openshift-console/console-5658697489-hvjvm" Oct 14 13:27:03 crc kubenswrapper[4725]: I1014 13:27:03.168379 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" 
(UniqueName: \"kubernetes.io/configmap/dde0d1e4-1d1c-4747-b499-d50fc70a6220-service-ca\") pod \"console-5658697489-hvjvm\" (UID: \"dde0d1e4-1d1c-4747-b499-d50fc70a6220\") " pod="openshift-console/console-5658697489-hvjvm" Oct 14 13:27:03 crc kubenswrapper[4725]: I1014 13:27:03.168402 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dde0d1e4-1d1c-4747-b499-d50fc70a6220-oauth-serving-cert\") pod \"console-5658697489-hvjvm\" (UID: \"dde0d1e4-1d1c-4747-b499-d50fc70a6220\") " pod="openshift-console/console-5658697489-hvjvm" Oct 14 13:27:03 crc kubenswrapper[4725]: I1014 13:27:03.168419 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dde0d1e4-1d1c-4747-b499-d50fc70a6220-console-config\") pod \"console-5658697489-hvjvm\" (UID: \"dde0d1e4-1d1c-4747-b499-d50fc70a6220\") " pod="openshift-console/console-5658697489-hvjvm" Oct 14 13:27:03 crc kubenswrapper[4725]: I1014 13:27:03.168438 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dde0d1e4-1d1c-4747-b499-d50fc70a6220-console-oauth-config\") pod \"console-5658697489-hvjvm\" (UID: \"dde0d1e4-1d1c-4747-b499-d50fc70a6220\") " pod="openshift-console/console-5658697489-hvjvm" Oct 14 13:27:03 crc kubenswrapper[4725]: I1014 13:27:03.168500 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dde0d1e4-1d1c-4747-b499-d50fc70a6220-console-serving-cert\") pod \"console-5658697489-hvjvm\" (UID: \"dde0d1e4-1d1c-4747-b499-d50fc70a6220\") " pod="openshift-console/console-5658697489-hvjvm" Oct 14 13:27:03 crc kubenswrapper[4725]: I1014 13:27:03.167879 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-zxznn" Oct 14 13:27:03 crc kubenswrapper[4725]: I1014 13:27:03.168578 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dde0d1e4-1d1c-4747-b499-d50fc70a6220-trusted-ca-bundle\") pod \"console-5658697489-hvjvm\" (UID: \"dde0d1e4-1d1c-4747-b499-d50fc70a6220\") " pod="openshift-console/console-5658697489-hvjvm" Oct 14 13:27:03 crc kubenswrapper[4725]: I1014 13:27:03.170307 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dde0d1e4-1d1c-4747-b499-d50fc70a6220-service-ca\") pod \"console-5658697489-hvjvm\" (UID: \"dde0d1e4-1d1c-4747-b499-d50fc70a6220\") " pod="openshift-console/console-5658697489-hvjvm" Oct 14 13:27:03 crc kubenswrapper[4725]: I1014 13:27:03.170394 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dde0d1e4-1d1c-4747-b499-d50fc70a6220-oauth-serving-cert\") pod \"console-5658697489-hvjvm\" (UID: \"dde0d1e4-1d1c-4747-b499-d50fc70a6220\") " pod="openshift-console/console-5658697489-hvjvm" Oct 14 13:27:03 crc kubenswrapper[4725]: I1014 13:27:03.170578 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dde0d1e4-1d1c-4747-b499-d50fc70a6220-console-config\") pod \"console-5658697489-hvjvm\" (UID: \"dde0d1e4-1d1c-4747-b499-d50fc70a6220\") " pod="openshift-console/console-5658697489-hvjvm" Oct 14 13:27:03 crc kubenswrapper[4725]: I1014 13:27:03.171145 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dde0d1e4-1d1c-4747-b499-d50fc70a6220-trusted-ca-bundle\") pod \"console-5658697489-hvjvm\" (UID: \"dde0d1e4-1d1c-4747-b499-d50fc70a6220\") " pod="openshift-console/console-5658697489-hvjvm" Oct 14 13:27:03 crc kubenswrapper[4725]: I1014 13:27:03.173355 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-nwf94"] Oct 14 13:27:03 crc kubenswrapper[4725]: I1014 13:27:03.174158 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dde0d1e4-1d1c-4747-b499-d50fc70a6220-console-serving-cert\") pod \"console-5658697489-hvjvm\" (UID: \"dde0d1e4-1d1c-4747-b499-d50fc70a6220\") " pod="openshift-console/console-5658697489-hvjvm" Oct 14 13:27:03 crc kubenswrapper[4725]: I1014 13:27:03.174922 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dde0d1e4-1d1c-4747-b499-d50fc70a6220-console-oauth-config\") pod \"console-5658697489-hvjvm\" (UID: \"dde0d1e4-1d1c-4747-b499-d50fc70a6220\") " pod="openshift-console/console-5658697489-hvjvm" Oct 14 13:27:03 crc kubenswrapper[4725]: I1014 13:27:03.190415 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsksb\" (UniqueName: \"kubernetes.io/projected/dde0d1e4-1d1c-4747-b499-d50fc70a6220-kube-api-access-rsksb\") pod \"console-5658697489-hvjvm\" (UID: \"dde0d1e4-1d1c-4747-b499-d50fc70a6220\") " pod="openshift-console/console-5658697489-hvjvm" Oct 14 13:27:03 crc kubenswrapper[4725]: I1014 13:27:03.242782 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5658697489-hvjvm" Oct 14 13:27:03 crc kubenswrapper[4725]: I1014 13:27:03.382501 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-zxznn"] Oct 14 13:27:03 crc kubenswrapper[4725]: W1014 13:27:03.392143 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42b236b2_dcbb_4c0c_8916_1eba5e90f301.slice/crio-a71a69f679158d9a88268225085e26c7e786724968b22dbd39a47a19d843d2bc WatchSource:0}: Error finding container a71a69f679158d9a88268225085e26c7e786724968b22dbd39a47a19d843d2bc: Status 404 returned error can't find the container with id a71a69f679158d9a88268225085e26c7e786724968b22dbd39a47a19d843d2bc Oct 14 13:27:03 crc kubenswrapper[4725]: I1014 13:27:03.524040 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5658697489-hvjvm"] Oct 14 13:27:03 crc kubenswrapper[4725]: W1014 13:27:03.528550 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddde0d1e4_1d1c_4747_b499_d50fc70a6220.slice/crio-e828467e618d913adb50b8824bcb56143043b7fe572f933d04f74dea0c4a22dc WatchSource:0}: Error finding container e828467e618d913adb50b8824bcb56143043b7fe572f933d04f74dea0c4a22dc: Status 404 returned error can't find the container with id e828467e618d913adb50b8824bcb56143043b7fe572f933d04f74dea0c4a22dc Oct 14 13:27:03 crc kubenswrapper[4725]: I1014 13:27:03.558352 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-nwf94" event={"ID":"5ed299ff-7f75-4376-a446-2f24b1d1e539","Type":"ContainerStarted","Data":"e751f98c85ca26eddef4b4244663ba6e2dd4753b6d9cf6d2b5590568ed577c42"} Oct 14 13:27:03 crc kubenswrapper[4725]: I1014 13:27:03.559854 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-zxznn" event={"ID":"42b236b2-dcbb-4c0c-8916-1eba5e90f301","Type":"ContainerStarted","Data":"a71a69f679158d9a88268225085e26c7e786724968b22dbd39a47a19d843d2bc"} Oct 14 13:27:03 crc kubenswrapper[4725]: I1014 13:27:03.560766 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5658697489-hvjvm" event={"ID":"dde0d1e4-1d1c-4747-b499-d50fc70a6220","Type":"ContainerStarted","Data":"e828467e618d913adb50b8824bcb56143043b7fe572f933d04f74dea0c4a22dc"} Oct 14 13:27:03 crc kubenswrapper[4725]: I1014 13:27:03.562138 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-5j272" event={"ID":"3f7ba899-2a43-4866-b3d7-34b6ca02b7e4","Type":"ContainerStarted","Data":"946d23d67728c1e9c8c3a3e34e42e49489eb68319d2a2907610bc683aeac9832"} Oct 14 13:27:03 crc kubenswrapper[4725]: E1014 13:27:03.857706 4725 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: failed to sync secret cache: timed out waiting for the condition Oct 14 13:27:03 crc kubenswrapper[4725]: E1014 13:27:03.858095 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9a636636-e68e-4e0f-ac55-b64f6e886b0e-tls-key-pair podName:9a636636-e68e-4e0f-ac55-b64f6e886b0e nodeName:}" failed. No retries permitted until 2025-10-14 13:27:04.358071137 +0000 UTC m=+741.206505946 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/9a636636-e68e-4e0f-ac55-b64f6e886b0e-tls-key-pair") pod "nmstate-webhook-6cdbc54649-bvg5g" (UID: "9a636636-e68e-4e0f-ac55-b64f6e886b0e") : failed to sync secret cache: timed out waiting for the condition Oct 14 13:27:04 crc kubenswrapper[4725]: I1014 13:27:04.053747 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Oct 14 13:27:04 crc kubenswrapper[4725]: I1014 13:27:04.393542 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9a636636-e68e-4e0f-ac55-b64f6e886b0e-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-bvg5g\" (UID: \"9a636636-e68e-4e0f-ac55-b64f6e886b0e\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-bvg5g" Oct 14 13:27:04 crc kubenswrapper[4725]: I1014 13:27:04.402296 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9a636636-e68e-4e0f-ac55-b64f6e886b0e-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-bvg5g\" (UID: \"9a636636-e68e-4e0f-ac55-b64f6e886b0e\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-bvg5g" Oct 14 13:27:04 crc kubenswrapper[4725]: I1014 13:27:04.573611 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5658697489-hvjvm" event={"ID":"dde0d1e4-1d1c-4747-b499-d50fc70a6220","Type":"ContainerStarted","Data":"4705b3fc9ebf4e48820682ce9be658bb1fe63a6c7966da987ce30cc464ea8838"} Oct 14 13:27:04 crc kubenswrapper[4725]: I1014 13:27:04.594847 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5658697489-hvjvm" podStartSLOduration=2.59479427 podStartE2EDuration="2.59479427s" podCreationTimestamp="2025-10-14 13:27:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:27:04.589954217 +0000 UTC m=+741.438389046" watchObservedRunningTime="2025-10-14 13:27:04.59479427 +0000 UTC m=+741.443229079" Oct 14 13:27:04 crc kubenswrapper[4725]: I1014 13:27:04.674472 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-bvg5g" Oct 14 13:27:05 crc kubenswrapper[4725]: I1014 13:27:05.021835 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-bvg5g"] Oct 14 13:27:05 crc kubenswrapper[4725]: W1014 13:27:05.031654 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a636636_e68e_4e0f_ac55_b64f6e886b0e.slice/crio-4321393ec5dff3db3ecdd5901d7072877d3c1f463bec38f37aebd37930fcf553 WatchSource:0}: Error finding container 4321393ec5dff3db3ecdd5901d7072877d3c1f463bec38f37aebd37930fcf553: Status 404 returned error can't find the container with id 4321393ec5dff3db3ecdd5901d7072877d3c1f463bec38f37aebd37930fcf553 Oct 14 13:27:05 crc kubenswrapper[4725]: I1014 13:27:05.580458 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-bvg5g" event={"ID":"9a636636-e68e-4e0f-ac55-b64f6e886b0e","Type":"ContainerStarted","Data":"4321393ec5dff3db3ecdd5901d7072877d3c1f463bec38f37aebd37930fcf553"} Oct 14 13:27:08 crc kubenswrapper[4725]: I1014 13:27:08.599026 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-bvg5g" event={"ID":"9a636636-e68e-4e0f-ac55-b64f6e886b0e","Type":"ContainerStarted","Data":"c5c9ccacff32a7839fec7cfab6f7421d451cf6515cad80894e248ce0e8d8de6a"} Oct 14 13:27:08 crc kubenswrapper[4725]: I1014 13:27:08.599708 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-bvg5g" Oct 14 13:27:08 crc kubenswrapper[4725]: I1014 13:27:08.601795 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-nwf94" event={"ID":"5ed299ff-7f75-4376-a446-2f24b1d1e539","Type":"ContainerStarted","Data":"f2267c5d2434bcce1d0238f23f95ef327eb94aeaa39bbd381e316dcd33a20383"} Oct 14 13:27:08 crc kubenswrapper[4725]: I1014 13:27:08.603603 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-zxznn" event={"ID":"42b236b2-dcbb-4c0c-8916-1eba5e90f301","Type":"ContainerStarted","Data":"a62c9c8b9dff76f6061c37f09e11865f6c1e2e61708b7892e6585f278c62be35"} Oct 14 13:27:08 crc kubenswrapper[4725]: I1014 13:27:08.606941 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-5j272" event={"ID":"3f7ba899-2a43-4866-b3d7-34b6ca02b7e4","Type":"ContainerStarted","Data":"61805e971212996a162e09acecf6d55007bd1409c36af2d46732c87ce0528cc7"} Oct 14 13:27:08 crc kubenswrapper[4725]: I1014 13:27:08.607231 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-5j272" Oct 14 13:27:08 crc kubenswrapper[4725]: I1014 13:27:08.621147 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-bvg5g" podStartSLOduration=4.2031096 podStartE2EDuration="6.621121557s" podCreationTimestamp="2025-10-14 13:27:02 +0000 UTC" firstStartedPulling="2025-10-14 13:27:05.033481453 +0000 UTC m=+741.881916262" lastFinishedPulling="2025-10-14 13:27:07.45149341 +0000 UTC m=+744.299928219" observedRunningTime="2025-10-14 13:27:08.616915022 +0000 UTC m=+745.465349831" watchObservedRunningTime="2025-10-14 13:27:08.621121557 +0000 UTC m=+745.469556376" Oct 14 13:27:08 crc kubenswrapper[4725]: I1014 13:27:08.634194 4725 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-nwf94" podStartSLOduration=2.452932845 podStartE2EDuration="6.634175664s" podCreationTimestamp="2025-10-14 13:27:02 +0000 UTC" firstStartedPulling="2025-10-14 13:27:03.210042876 +0000 UTC m=+740.058477685" lastFinishedPulling="2025-10-14 13:27:07.391285685 +0000 UTC m=+744.239720504" observedRunningTime="2025-10-14 13:27:08.633727231 +0000 UTC m=+745.482162030" watchObservedRunningTime="2025-10-14 13:27:08.634175664 +0000 UTC m=+745.482610463" Oct 14 13:27:08 crc kubenswrapper[4725]: I1014 13:27:08.649437 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-5j272" podStartSLOduration=2.120438951 podStartE2EDuration="6.649418599s" podCreationTimestamp="2025-10-14 13:27:02 +0000 UTC" firstStartedPulling="2025-10-14 13:27:02.917575237 +0000 UTC m=+739.766010046" lastFinishedPulling="2025-10-14 13:27:07.446554895 +0000 UTC m=+744.294989694" observedRunningTime="2025-10-14 13:27:08.647782595 +0000 UTC m=+745.496217424" watchObservedRunningTime="2025-10-14 13:27:08.649418599 +0000 UTC m=+745.497853418" Oct 14 13:27:11 crc kubenswrapper[4725]: I1014 13:27:11.381061 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-96fdf"] Oct 14 13:27:11 crc kubenswrapper[4725]: I1014 13:27:11.382799 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-96fdf" podUID="4bb4a9f2-74c0-401e-b880-bd17f95b00d2" containerName="controller-manager" containerID="cri-o://3cafaa302c96987c9d7e591aabfe3dcecfb6481bd33cce82016808bb8b1ff58e" gracePeriod=30 Oct 14 13:27:11 crc kubenswrapper[4725]: I1014 13:27:11.479748 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-462vw"] Oct 14 13:27:11 crc kubenswrapper[4725]: I1014 13:27:11.480631 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-462vw" podUID="ded0ec42-2430-4e32-909c-308aeef7c49a" containerName="route-controller-manager" containerID="cri-o://d3bf45d10317eabc4968caa5319d5a0cc2d85243e9e869a30c7884615c3a9347" gracePeriod=30 Oct 14 13:27:11 crc kubenswrapper[4725]: I1014 13:27:11.625184 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-zxznn" event={"ID":"42b236b2-dcbb-4c0c-8916-1eba5e90f301","Type":"ContainerStarted","Data":"47e1711cba90b7be807b7b28df5cf3c5e988a33742f0543e581af48f5e5985c9"} Oct 14 13:27:11 crc kubenswrapper[4725]: I1014 13:27:11.632656 4725 generic.go:334] "Generic (PLEG): container finished" podID="ded0ec42-2430-4e32-909c-308aeef7c49a" containerID="d3bf45d10317eabc4968caa5319d5a0cc2d85243e9e869a30c7884615c3a9347" exitCode=0 Oct 14 13:27:11 crc kubenswrapper[4725]: I1014 13:27:11.632773 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-462vw" event={"ID":"ded0ec42-2430-4e32-909c-308aeef7c49a","Type":"ContainerDied","Data":"d3bf45d10317eabc4968caa5319d5a0cc2d85243e9e869a30c7884615c3a9347"} Oct 14 13:27:11 crc kubenswrapper[4725]: I1014 13:27:11.634758 4725 generic.go:334] "Generic (PLEG): container finished" podID="4bb4a9f2-74c0-401e-b880-bd17f95b00d2" containerID="3cafaa302c96987c9d7e591aabfe3dcecfb6481bd33cce82016808bb8b1ff58e" exitCode=0 Oct 14 13:27:11 crc 
kubenswrapper[4725]: I1014 13:27:11.634782 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-96fdf" event={"ID":"4bb4a9f2-74c0-401e-b880-bd17f95b00d2","Type":"ContainerDied","Data":"3cafaa302c96987c9d7e591aabfe3dcecfb6481bd33cce82016808bb8b1ff58e"} Oct 14 13:27:11 crc kubenswrapper[4725]: I1014 13:27:11.642981 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-zxznn" podStartSLOduration=1.959531838 podStartE2EDuration="9.642965387s" podCreationTimestamp="2025-10-14 13:27:02 +0000 UTC" firstStartedPulling="2025-10-14 13:27:03.394761592 +0000 UTC m=+740.243196411" lastFinishedPulling="2025-10-14 13:27:11.078195151 +0000 UTC m=+747.926629960" observedRunningTime="2025-10-14 13:27:11.641436015 +0000 UTC m=+748.489870824" watchObservedRunningTime="2025-10-14 13:27:11.642965387 +0000 UTC m=+748.491400196" Oct 14 13:27:11 crc kubenswrapper[4725]: I1014 13:27:11.799588 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-96fdf" Oct 14 13:27:11 crc kubenswrapper[4725]: I1014 13:27:11.854327 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-462vw" Oct 14 13:27:11 crc kubenswrapper[4725]: I1014 13:27:11.997137 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4bb4a9f2-74c0-401e-b880-bd17f95b00d2-serving-cert\") pod \"4bb4a9f2-74c0-401e-b880-bd17f95b00d2\" (UID: \"4bb4a9f2-74c0-401e-b880-bd17f95b00d2\") " Oct 14 13:27:11 crc kubenswrapper[4725]: I1014 13:27:11.997220 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bb4a9f2-74c0-401e-b880-bd17f95b00d2-config\") pod \"4bb4a9f2-74c0-401e-b880-bd17f95b00d2\" (UID: \"4bb4a9f2-74c0-401e-b880-bd17f95b00d2\") " Oct 14 13:27:11 crc kubenswrapper[4725]: I1014 13:27:11.997244 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4bb4a9f2-74c0-401e-b880-bd17f95b00d2-proxy-ca-bundles\") pod \"4bb4a9f2-74c0-401e-b880-bd17f95b00d2\" (UID: \"4bb4a9f2-74c0-401e-b880-bd17f95b00d2\") " Oct 14 13:27:11 crc kubenswrapper[4725]: I1014 13:27:11.997275 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ded0ec42-2430-4e32-909c-308aeef7c49a-client-ca\") pod \"ded0ec42-2430-4e32-909c-308aeef7c49a\" (UID: \"ded0ec42-2430-4e32-909c-308aeef7c49a\") " Oct 14 13:27:11 crc kubenswrapper[4725]: I1014 13:27:11.997300 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66pz6\" (UniqueName: \"kubernetes.io/projected/4bb4a9f2-74c0-401e-b880-bd17f95b00d2-kube-api-access-66pz6\") pod \"4bb4a9f2-74c0-401e-b880-bd17f95b00d2\" (UID: \"4bb4a9f2-74c0-401e-b880-bd17f95b00d2\") " Oct 14 13:27:11 crc kubenswrapper[4725]: I1014 13:27:11.997342 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4bb4a9f2-74c0-401e-b880-bd17f95b00d2-client-ca\") pod \"4bb4a9f2-74c0-401e-b880-bd17f95b00d2\" (UID: \"4bb4a9f2-74c0-401e-b880-bd17f95b00d2\") " Oct 14 13:27:11 crc kubenswrapper[4725]: I1014 13:27:11.997363 
4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lv4m\" (UniqueName: \"kubernetes.io/projected/ded0ec42-2430-4e32-909c-308aeef7c49a-kube-api-access-8lv4m\") pod \"ded0ec42-2430-4e32-909c-308aeef7c49a\" (UID: \"ded0ec42-2430-4e32-909c-308aeef7c49a\") " Oct 14 13:27:11 crc kubenswrapper[4725]: I1014 13:27:11.997416 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ded0ec42-2430-4e32-909c-308aeef7c49a-serving-cert\") pod \"ded0ec42-2430-4e32-909c-308aeef7c49a\" (UID: \"ded0ec42-2430-4e32-909c-308aeef7c49a\") " Oct 14 13:27:11 crc kubenswrapper[4725]: I1014 13:27:11.997571 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ded0ec42-2430-4e32-909c-308aeef7c49a-config\") pod \"ded0ec42-2430-4e32-909c-308aeef7c49a\" (UID: \"ded0ec42-2430-4e32-909c-308aeef7c49a\") " Oct 14 13:27:11 crc kubenswrapper[4725]: I1014 13:27:11.998278 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb4a9f2-74c0-401e-b880-bd17f95b00d2-client-ca" (OuterVolumeSpecName: "client-ca") pod "4bb4a9f2-74c0-401e-b880-bd17f95b00d2" (UID: "4bb4a9f2-74c0-401e-b880-bd17f95b00d2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:27:11 crc kubenswrapper[4725]: I1014 13:27:11.998303 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb4a9f2-74c0-401e-b880-bd17f95b00d2-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4bb4a9f2-74c0-401e-b880-bd17f95b00d2" (UID: "4bb4a9f2-74c0-401e-b880-bd17f95b00d2"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:27:11 crc kubenswrapper[4725]: I1014 13:27:11.998365 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb4a9f2-74c0-401e-b880-bd17f95b00d2-config" (OuterVolumeSpecName: "config") pod "4bb4a9f2-74c0-401e-b880-bd17f95b00d2" (UID: "4bb4a9f2-74c0-401e-b880-bd17f95b00d2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:27:11 crc kubenswrapper[4725]: I1014 13:27:11.998534 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ded0ec42-2430-4e32-909c-308aeef7c49a-client-ca" (OuterVolumeSpecName: "client-ca") pod "ded0ec42-2430-4e32-909c-308aeef7c49a" (UID: "ded0ec42-2430-4e32-909c-308aeef7c49a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:27:11 crc kubenswrapper[4725]: I1014 13:27:11.998565 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ded0ec42-2430-4e32-909c-308aeef7c49a-config" (OuterVolumeSpecName: "config") pod "ded0ec42-2430-4e32-909c-308aeef7c49a" (UID: "ded0ec42-2430-4e32-909c-308aeef7c49a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.002915 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bb4a9f2-74c0-401e-b880-bd17f95b00d2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4bb4a9f2-74c0-401e-b880-bd17f95b00d2" (UID: "4bb4a9f2-74c0-401e-b880-bd17f95b00d2"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.002930 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ded0ec42-2430-4e32-909c-308aeef7c49a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ded0ec42-2430-4e32-909c-308aeef7c49a" (UID: "ded0ec42-2430-4e32-909c-308aeef7c49a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.003104 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb4a9f2-74c0-401e-b880-bd17f95b00d2-kube-api-access-66pz6" (OuterVolumeSpecName: "kube-api-access-66pz6") pod "4bb4a9f2-74c0-401e-b880-bd17f95b00d2" (UID: "4bb4a9f2-74c0-401e-b880-bd17f95b00d2"). InnerVolumeSpecName "kube-api-access-66pz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.004069 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ded0ec42-2430-4e32-909c-308aeef7c49a-kube-api-access-8lv4m" (OuterVolumeSpecName: "kube-api-access-8lv4m") pod "ded0ec42-2430-4e32-909c-308aeef7c49a" (UID: "ded0ec42-2430-4e32-909c-308aeef7c49a"). InnerVolumeSpecName "kube-api-access-8lv4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.098775 4725 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4bb4a9f2-74c0-401e-b880-bd17f95b00d2-client-ca\") on node \"crc\" DevicePath \"\"" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.098808 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lv4m\" (UniqueName: \"kubernetes.io/projected/ded0ec42-2430-4e32-909c-308aeef7c49a-kube-api-access-8lv4m\") on node \"crc\" DevicePath \"\"" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.098820 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ded0ec42-2430-4e32-909c-308aeef7c49a-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.098830 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ded0ec42-2430-4e32-909c-308aeef7c49a-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.098839 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4bb4a9f2-74c0-401e-b880-bd17f95b00d2-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.098847 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bb4a9f2-74c0-401e-b880-bd17f95b00d2-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.098854 4725 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4bb4a9f2-74c0-401e-b880-bd17f95b00d2-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.098863 4725 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ded0ec42-2430-4e32-909c-308aeef7c49a-client-ca\") on node \"crc\" DevicePath \"\"" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 
13:27:12.098871 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66pz6\" (UniqueName: \"kubernetes.io/projected/4bb4a9f2-74c0-401e-b880-bd17f95b00d2-kube-api-access-66pz6\") on node \"crc\" DevicePath \"\"" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.642398 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-462vw" event={"ID":"ded0ec42-2430-4e32-909c-308aeef7c49a","Type":"ContainerDied","Data":"c39e78d293aadc95081d7f93f2091dc4de458864bec628ef6655b60b35a42eb2"} Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.642482 4725 scope.go:117] "RemoveContainer" containerID="d3bf45d10317eabc4968caa5319d5a0cc2d85243e9e869a30c7884615c3a9347" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.642565 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-462vw" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.645118 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-96fdf" event={"ID":"4bb4a9f2-74c0-401e-b880-bd17f95b00d2","Type":"ContainerDied","Data":"a72738f990f069adffe21122c814d062d74af3de371cfce48438d3a5098c8d79"} Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.645315 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-96fdf" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.658268 4725 scope.go:117] "RemoveContainer" containerID="3cafaa302c96987c9d7e591aabfe3dcecfb6481bd33cce82016808bb8b1ff58e" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.692384 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-96fdf"] Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.699365 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-96fdf"] Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.702822 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-462vw"] Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.705571 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-462vw"] Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.874342 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-f6c64b4b5-6hrdc"] Oct 14 13:27:12 crc kubenswrapper[4725]: E1014 13:27:12.874724 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bb4a9f2-74c0-401e-b880-bd17f95b00d2" containerName="controller-manager" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.874754 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bb4a9f2-74c0-401e-b880-bd17f95b00d2" containerName="controller-manager" Oct 14 13:27:12 crc kubenswrapper[4725]: E1014 13:27:12.874776 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ded0ec42-2430-4e32-909c-308aeef7c49a" containerName="route-controller-manager" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.874788 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="ded0ec42-2430-4e32-909c-308aeef7c49a" containerName="route-controller-manager" Oct 14 13:27:12 crc kubenswrapper[4725]: 
I1014 13:27:12.875005 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="ded0ec42-2430-4e32-909c-308aeef7c49a" containerName="route-controller-manager" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.875044 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bb4a9f2-74c0-401e-b880-bd17f95b00d2" containerName="controller-manager" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.875652 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f6c64b4b5-6hrdc" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.877479 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.877935 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.877938 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.878810 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.878984 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.879577 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.880078 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cb7996754-5mddx"] Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.881121 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cb7996754-5mddx" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.884135 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.887254 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.890817 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.890942 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f6c64b4b5-6hrdc"] Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.891040 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.891126 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.891386 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.891545 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.911398 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-694xq\" (UniqueName: \"kubernetes.io/projected/0e947e59-22b7-47ec-b437-bfbefc33eb95-kube-api-access-694xq\") pod \"route-controller-manager-7cb7996754-5mddx\" (UID: \"0e947e59-22b7-47ec-b437-bfbefc33eb95\") " pod="openshift-route-controller-manager/route-controller-manager-7cb7996754-5mddx" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.911806 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cac2b93-777e-492d-a506-0318b1a5c2f1-config\") pod \"controller-manager-f6c64b4b5-6hrdc\" (UID: \"3cac2b93-777e-492d-a506-0318b1a5c2f1\") " pod="openshift-controller-manager/controller-manager-f6c64b4b5-6hrdc" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.911910 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e947e59-22b7-47ec-b437-bfbefc33eb95-serving-cert\") pod \"route-controller-manager-7cb7996754-5mddx\" (UID: \"0e947e59-22b7-47ec-b437-bfbefc33eb95\") " pod="openshift-route-controller-manager/route-controller-manager-7cb7996754-5mddx" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.911968 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3cac2b93-777e-492d-a506-0318b1a5c2f1-proxy-ca-bundles\") pod \"controller-manager-f6c64b4b5-6hrdc\" (UID: \"3cac2b93-777e-492d-a506-0318b1a5c2f1\") " pod="openshift-controller-manager/controller-manager-f6c64b4b5-6hrdc" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.911996 4725 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92fhl\" (UniqueName: \"kubernetes.io/projected/3cac2b93-777e-492d-a506-0318b1a5c2f1-kube-api-access-92fhl\") pod \"controller-manager-f6c64b4b5-6hrdc\" (UID: \"3cac2b93-777e-492d-a506-0318b1a5c2f1\") " pod="openshift-controller-manager/controller-manager-f6c64b4b5-6hrdc" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.912031 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e947e59-22b7-47ec-b437-bfbefc33eb95-client-ca\") pod \"route-controller-manager-7cb7996754-5mddx\" (UID: \"0e947e59-22b7-47ec-b437-bfbefc33eb95\") " pod="openshift-route-controller-manager/route-controller-manager-7cb7996754-5mddx" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.912052 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3cac2b93-777e-492d-a506-0318b1a5c2f1-serving-cert\") pod \"controller-manager-f6c64b4b5-6hrdc\" (UID: \"3cac2b93-777e-492d-a506-0318b1a5c2f1\") " pod="openshift-controller-manager/controller-manager-f6c64b4b5-6hrdc" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.912088 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e947e59-22b7-47ec-b437-bfbefc33eb95-config\") pod \"route-controller-manager-7cb7996754-5mddx\" (UID: \"0e947e59-22b7-47ec-b437-bfbefc33eb95\") " pod="openshift-route-controller-manager/route-controller-manager-7cb7996754-5mddx" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.912111 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3cac2b93-777e-492d-a506-0318b1a5c2f1-client-ca\") pod \"controller-manager-f6c64b4b5-6hrdc\" (UID: \"3cac2b93-777e-492d-a506-0318b1a5c2f1\") " pod="openshift-controller-manager/controller-manager-f6c64b4b5-6hrdc" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.916831 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-5j272" Oct 14 13:27:12 crc kubenswrapper[4725]: I1014 13:27:12.917131 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cb7996754-5mddx"] Oct 14 13:27:13 crc kubenswrapper[4725]: I1014 13:27:13.012934 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3cac2b93-777e-492d-a506-0318b1a5c2f1-proxy-ca-bundles\") pod \"controller-manager-f6c64b4b5-6hrdc\" (UID: \"3cac2b93-777e-492d-a506-0318b1a5c2f1\") " pod="openshift-controller-manager/controller-manager-f6c64b4b5-6hrdc" Oct 14 13:27:13 crc kubenswrapper[4725]: I1014 13:27:13.012996 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92fhl\" (UniqueName: \"kubernetes.io/projected/3cac2b93-777e-492d-a506-0318b1a5c2f1-kube-api-access-92fhl\") pod \"controller-manager-f6c64b4b5-6hrdc\" (UID: \"3cac2b93-777e-492d-a506-0318b1a5c2f1\") " pod="openshift-controller-manager/controller-manager-f6c64b4b5-6hrdc" Oct 14 13:27:13 crc kubenswrapper[4725]: I1014 13:27:13.013058 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/0e947e59-22b7-47ec-b437-bfbefc33eb95-client-ca\") pod \"route-controller-manager-7cb7996754-5mddx\" (UID: \"0e947e59-22b7-47ec-b437-bfbefc33eb95\") " pod="openshift-route-controller-manager/route-controller-manager-7cb7996754-5mddx" Oct 14 13:27:13 crc kubenswrapper[4725]: I1014 13:27:13.013083 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3cac2b93-777e-492d-a506-0318b1a5c2f1-serving-cert\") pod \"controller-manager-f6c64b4b5-6hrdc\" (UID: \"3cac2b93-777e-492d-a506-0318b1a5c2f1\") " pod="openshift-controller-manager/controller-manager-f6c64b4b5-6hrdc" Oct 14 13:27:13 crc kubenswrapper[4725]: I1014 13:27:13.013119 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e947e59-22b7-47ec-b437-bfbefc33eb95-config\") pod \"route-controller-manager-7cb7996754-5mddx\" (UID: \"0e947e59-22b7-47ec-b437-bfbefc33eb95\") " pod="openshift-route-controller-manager/route-controller-manager-7cb7996754-5mddx" Oct 14 13:27:13 crc kubenswrapper[4725]: I1014 13:27:13.013136 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3cac2b93-777e-492d-a506-0318b1a5c2f1-client-ca\") pod \"controller-manager-f6c64b4b5-6hrdc\" (UID: \"3cac2b93-777e-492d-a506-0318b1a5c2f1\") " pod="openshift-controller-manager/controller-manager-f6c64b4b5-6hrdc" Oct 14 13:27:13 crc kubenswrapper[4725]: I1014 13:27:13.013157 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-694xq\" (UniqueName: \"kubernetes.io/projected/0e947e59-22b7-47ec-b437-bfbefc33eb95-kube-api-access-694xq\") pod \"route-controller-manager-7cb7996754-5mddx\" (UID: \"0e947e59-22b7-47ec-b437-bfbefc33eb95\") " pod="openshift-route-controller-manager/route-controller-manager-7cb7996754-5mddx" Oct 14 13:27:13 crc kubenswrapper[4725]: I1014 13:27:13.013189 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cac2b93-777e-492d-a506-0318b1a5c2f1-config\") pod \"controller-manager-f6c64b4b5-6hrdc\" (UID: \"3cac2b93-777e-492d-a506-0318b1a5c2f1\") " pod="openshift-controller-manager/controller-manager-f6c64b4b5-6hrdc" Oct 14 13:27:13 crc kubenswrapper[4725]: I1014 13:27:13.013213 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e947e59-22b7-47ec-b437-bfbefc33eb95-serving-cert\") pod \"route-controller-manager-7cb7996754-5mddx\" (UID: \"0e947e59-22b7-47ec-b437-bfbefc33eb95\") " pod="openshift-route-controller-manager/route-controller-manager-7cb7996754-5mddx" Oct 14 13:27:13 crc kubenswrapper[4725]: I1014 13:27:13.014051 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e947e59-22b7-47ec-b437-bfbefc33eb95-client-ca\") pod \"route-controller-manager-7cb7996754-5mddx\" (UID: \"0e947e59-22b7-47ec-b437-bfbefc33eb95\") " pod="openshift-route-controller-manager/route-controller-manager-7cb7996754-5mddx" Oct 14 13:27:13 crc kubenswrapper[4725]: I1014 13:27:13.014564 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3cac2b93-777e-492d-a506-0318b1a5c2f1-client-ca\") pod \"controller-manager-f6c64b4b5-6hrdc\" (UID: \"3cac2b93-777e-492d-a506-0318b1a5c2f1\") " 
pod="openshift-controller-manager/controller-manager-f6c64b4b5-6hrdc" Oct 14 13:27:13 crc kubenswrapper[4725]: I1014 13:27:13.014762 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e947e59-22b7-47ec-b437-bfbefc33eb95-config\") pod \"route-controller-manager-7cb7996754-5mddx\" (UID: \"0e947e59-22b7-47ec-b437-bfbefc33eb95\") " pod="openshift-route-controller-manager/route-controller-manager-7cb7996754-5mddx" Oct 14 13:27:13 crc kubenswrapper[4725]: I1014 13:27:13.015314 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cac2b93-777e-492d-a506-0318b1a5c2f1-config\") pod \"controller-manager-f6c64b4b5-6hrdc\" (UID: \"3cac2b93-777e-492d-a506-0318b1a5c2f1\") " pod="openshift-controller-manager/controller-manager-f6c64b4b5-6hrdc" Oct 14 13:27:13 crc kubenswrapper[4725]: I1014 13:27:13.017814 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3cac2b93-777e-492d-a506-0318b1a5c2f1-proxy-ca-bundles\") pod \"controller-manager-f6c64b4b5-6hrdc\" (UID: \"3cac2b93-777e-492d-a506-0318b1a5c2f1\") " pod="openshift-controller-manager/controller-manager-f6c64b4b5-6hrdc" Oct 14 13:27:13 crc kubenswrapper[4725]: I1014 13:27:13.021420 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3cac2b93-777e-492d-a506-0318b1a5c2f1-serving-cert\") pod \"controller-manager-f6c64b4b5-6hrdc\" (UID: \"3cac2b93-777e-492d-a506-0318b1a5c2f1\") " pod="openshift-controller-manager/controller-manager-f6c64b4b5-6hrdc" Oct 14 13:27:13 crc kubenswrapper[4725]: I1014 13:27:13.021424 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e947e59-22b7-47ec-b437-bfbefc33eb95-serving-cert\") pod \"route-controller-manager-7cb7996754-5mddx\" (UID: \"0e947e59-22b7-47ec-b437-bfbefc33eb95\") " pod="openshift-route-controller-manager/route-controller-manager-7cb7996754-5mddx" Oct 14 13:27:13 crc kubenswrapper[4725]: I1014 13:27:13.032377 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92fhl\" (UniqueName: \"kubernetes.io/projected/3cac2b93-777e-492d-a506-0318b1a5c2f1-kube-api-access-92fhl\") pod \"controller-manager-f6c64b4b5-6hrdc\" (UID: \"3cac2b93-777e-492d-a506-0318b1a5c2f1\") " pod="openshift-controller-manager/controller-manager-f6c64b4b5-6hrdc" Oct 14 13:27:13 crc kubenswrapper[4725]: I1014 13:27:13.035631 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-694xq\" (UniqueName: \"kubernetes.io/projected/0e947e59-22b7-47ec-b437-bfbefc33eb95-kube-api-access-694xq\") pod \"route-controller-manager-7cb7996754-5mddx\" (UID: \"0e947e59-22b7-47ec-b437-bfbefc33eb95\") " pod="openshift-route-controller-manager/route-controller-manager-7cb7996754-5mddx" Oct 14 13:27:13 crc kubenswrapper[4725]: I1014 13:27:13.203034 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f6c64b4b5-6hrdc" Oct 14 13:27:13 crc kubenswrapper[4725]: I1014 13:27:13.217692 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cb7996754-5mddx" Oct 14 13:27:13 crc kubenswrapper[4725]: I1014 13:27:13.244081 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5658697489-hvjvm" Oct 14 13:27:13 crc kubenswrapper[4725]: I1014 13:27:13.244118 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5658697489-hvjvm" Oct 14 13:27:13 crc kubenswrapper[4725]: I1014 13:27:13.249892 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5658697489-hvjvm" Oct 14 13:27:13 crc kubenswrapper[4725]: I1014 13:27:13.426773 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f6c64b4b5-6hrdc"] Oct 14 13:27:13 crc kubenswrapper[4725]: I1014 13:27:13.651860 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f6c64b4b5-6hrdc" event={"ID":"3cac2b93-777e-492d-a506-0318b1a5c2f1","Type":"ContainerStarted","Data":"5fef306e6981dfa5c61f73f277d49420c7387674c2c8d7ea7b47183153a68ff7"} Oct 14 13:27:13 crc kubenswrapper[4725]: I1014 13:27:13.651901 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f6c64b4b5-6hrdc" event={"ID":"3cac2b93-777e-492d-a506-0318b1a5c2f1","Type":"ContainerStarted","Data":"0d6eeb653cba9f9d524d9e2a3142f579ba8c7af732ebffaaa66d0c4d1c616c3d"} Oct 14 13:27:13 crc kubenswrapper[4725]: I1014 13:27:13.652196 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-f6c64b4b5-6hrdc" Oct 14 13:27:13 crc kubenswrapper[4725]: I1014 13:27:13.657434 4725 patch_prober.go:28] interesting pod/controller-manager-f6c64b4b5-6hrdc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.46:8443/healthz\": dial tcp 10.217.0.46:8443: connect: connection refused" start-of-body= Oct 14 13:27:13 crc kubenswrapper[4725]: I1014 13:27:13.657502 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-f6c64b4b5-6hrdc" podUID="3cac2b93-777e-492d-a506-0318b1a5c2f1" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.46:8443/healthz\": dial tcp 10.217.0.46:8443: connect: connection refused" Oct 14 13:27:13 crc kubenswrapper[4725]: I1014 13:27:13.659322 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5658697489-hvjvm" Oct 14 13:27:13 crc kubenswrapper[4725]: I1014 13:27:13.673576 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cb7996754-5mddx"] Oct 14 13:27:13 crc kubenswrapper[4725]: I1014 13:27:13.675412 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-f6c64b4b5-6hrdc" podStartSLOduration=2.675393152 podStartE2EDuration="2.675393152s" podCreationTimestamp="2025-10-14 13:27:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:27:13.667579699 +0000 UTC m=+750.516014518" watchObservedRunningTime="2025-10-14 13:27:13.675393152 +0000 UTC m=+750.523827971" Oct 14 13:27:13 crc kubenswrapper[4725]: I1014 13:27:13.747379 4725 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-dbshk"] Oct 14 13:27:13 crc kubenswrapper[4725]: I1014 13:27:13.943294 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb4a9f2-74c0-401e-b880-bd17f95b00d2" path="/var/lib/kubelet/pods/4bb4a9f2-74c0-401e-b880-bd17f95b00d2/volumes" Oct 14 13:27:13 crc kubenswrapper[4725]: I1014 13:27:13.943906 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ded0ec42-2430-4e32-909c-308aeef7c49a" path="/var/lib/kubelet/pods/ded0ec42-2430-4e32-909c-308aeef7c49a/volumes" Oct 14 13:27:14 crc kubenswrapper[4725]: I1014 13:27:14.662469 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cb7996754-5mddx" event={"ID":"0e947e59-22b7-47ec-b437-bfbefc33eb95","Type":"ContainerStarted","Data":"1c3fea18dad51994a4fd7f043450a9042a2ea9706e488be80eda5e7bb1514061"} Oct 14 13:27:14 crc kubenswrapper[4725]: I1014 13:27:14.663026 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cb7996754-5mddx" event={"ID":"0e947e59-22b7-47ec-b437-bfbefc33eb95","Type":"ContainerStarted","Data":"ec5f6e091199c0b8f65e4f38c23c36bb5b521b92b581c54123fc9f1cb97b9cf0"} Oct 14 13:27:14 crc kubenswrapper[4725]: I1014 13:27:14.667123 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-f6c64b4b5-6hrdc" Oct 14 13:27:14 crc kubenswrapper[4725]: I1014 13:27:14.689064 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7cb7996754-5mddx" podStartSLOduration=3.6890411690000002 podStartE2EDuration="3.689041169s" podCreationTimestamp="2025-10-14 13:27:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:27:14.687095826 +0000 UTC m=+751.535530645" watchObservedRunningTime="2025-10-14 13:27:14.689041169 +0000 UTC m=+751.537475978" Oct 14 13:27:15 crc kubenswrapper[4725]: I1014 13:27:15.668749 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7cb7996754-5mddx" Oct 14 13:27:15 crc kubenswrapper[4725]: I1014 13:27:15.675613 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7cb7996754-5mddx" Oct 14 13:27:22 crc kubenswrapper[4725]: I1014 13:27:22.145916 4725 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Oct 14 13:27:24 crc kubenswrapper[4725]: I1014 13:27:24.680814 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-bvg5g" Oct 14 13:27:32 crc kubenswrapper[4725]: I1014 13:27:32.520720 4725 patch_prober.go:28] interesting pod/machine-config-daemon-t9hh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 13:27:32 crc kubenswrapper[4725]: I1014 13:27:32.521386 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 13:27:37 crc kubenswrapper[4725]: I1014 13:27:37.394369 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bnmkg"] Oct 14 13:27:37 crc kubenswrapper[4725]: I1014 13:27:37.395909 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bnmkg" Oct 14 13:27:37 crc kubenswrapper[4725]: I1014 13:27:37.404564 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Oct 14 13:27:37 crc kubenswrapper[4725]: I1014 13:27:37.405027 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bnmkg"] Oct 14 13:27:37 crc kubenswrapper[4725]: I1014 13:27:37.440473 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f11c88bf-9dff-4a1e-825d-bcaae865c70d-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bnmkg\" (UID: \"f11c88bf-9dff-4a1e-825d-bcaae865c70d\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bnmkg" Oct 14 13:27:37 crc kubenswrapper[4725]: I1014 13:27:37.440729 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58n2c\" (UniqueName: \"kubernetes.io/projected/f11c88bf-9dff-4a1e-825d-bcaae865c70d-kube-api-access-58n2c\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bnmkg\" (UID: \"f11c88bf-9dff-4a1e-825d-bcaae865c70d\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bnmkg" Oct 14 13:27:37 crc kubenswrapper[4725]: I1014 13:27:37.440769 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f11c88bf-9dff-4a1e-825d-bcaae865c70d-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bnmkg\" (UID: \"f11c88bf-9dff-4a1e-825d-bcaae865c70d\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bnmkg" Oct 14 13:27:37 crc kubenswrapper[4725]: I1014 13:27:37.542510 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f11c88bf-9dff-4a1e-825d-bcaae865c70d-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bnmkg\" (UID: \"f11c88bf-9dff-4a1e-825d-bcaae865c70d\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bnmkg" Oct 14 13:27:37 crc kubenswrapper[4725]: I1014 13:27:37.542584 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58n2c\" (UniqueName: \"kubernetes.io/projected/f11c88bf-9dff-4a1e-825d-bcaae865c70d-kube-api-access-58n2c\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bnmkg\" (UID: \"f11c88bf-9dff-4a1e-825d-bcaae865c70d\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bnmkg" Oct 14 13:27:37 crc kubenswrapper[4725]: I1014 13:27:37.542622 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/f11c88bf-9dff-4a1e-825d-bcaae865c70d-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bnmkg\" (UID: \"f11c88bf-9dff-4a1e-825d-bcaae865c70d\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bnmkg" Oct 14 13:27:37 crc kubenswrapper[4725]: I1014 13:27:37.543251 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f11c88bf-9dff-4a1e-825d-bcaae865c70d-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bnmkg\" (UID: \"f11c88bf-9dff-4a1e-825d-bcaae865c70d\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bnmkg" Oct 14 13:27:37 crc kubenswrapper[4725]: I1014 13:27:37.544790 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f11c88bf-9dff-4a1e-825d-bcaae865c70d-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bnmkg\" (UID: \"f11c88bf-9dff-4a1e-825d-bcaae865c70d\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bnmkg" Oct 14 13:27:37 crc kubenswrapper[4725]: I1014 13:27:37.563623 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58n2c\" (UniqueName: \"kubernetes.io/projected/f11c88bf-9dff-4a1e-825d-bcaae865c70d-kube-api-access-58n2c\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bnmkg\" (UID: \"f11c88bf-9dff-4a1e-825d-bcaae865c70d\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bnmkg" Oct 14 13:27:37 crc kubenswrapper[4725]: I1014 13:27:37.713715 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bnmkg" Oct 14 13:27:38 crc kubenswrapper[4725]: I1014 13:27:38.177100 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bnmkg"] Oct 14 13:27:38 crc kubenswrapper[4725]: I1014 13:27:38.798597 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-dbshk" podUID="9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9" containerName="console" containerID="cri-o://7c0b62ef5c6aa141f16a2227f2a5f4dd9c305cc041cbc58c134e303c6b700b75" gracePeriod=15 Oct 14 13:27:38 crc kubenswrapper[4725]: I1014 13:27:38.802630 4725 generic.go:334] "Generic (PLEG): container finished" podID="f11c88bf-9dff-4a1e-825d-bcaae865c70d" containerID="46b6091e2e61582cbdc1452124fe06834a695fad4ed9c2504761d52c7c0c73ef" exitCode=0 Oct 14 13:27:38 crc kubenswrapper[4725]: I1014 13:27:38.802717 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bnmkg" event={"ID":"f11c88bf-9dff-4a1e-825d-bcaae865c70d","Type":"ContainerDied","Data":"46b6091e2e61582cbdc1452124fe06834a695fad4ed9c2504761d52c7c0c73ef"} Oct 14 13:27:38 crc kubenswrapper[4725]: I1014 13:27:38.802783 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bnmkg" event={"ID":"f11c88bf-9dff-4a1e-825d-bcaae865c70d","Type":"ContainerStarted","Data":"85fd4b344d8c5d7486ef471964f3b2c0374efd09319ae09a459ec6fee996962a"} Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.342411 4725 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-f9d7485db-dbshk_9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9/console/0.log" Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.342586 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-dbshk" Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.467979 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9-oauth-serving-cert\") pod \"9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9\" (UID: \"9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9\") " Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.468087 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9-service-ca\") pod \"9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9\" (UID: \"9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9\") " Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.468129 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxdlz\" (UniqueName: \"kubernetes.io/projected/9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9-kube-api-access-wxdlz\") pod \"9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9\" (UID: \"9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9\") " Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.468190 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9-console-config\") pod \"9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9\" (UID: \"9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9\") " Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.468219 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9-console-oauth-config\") pod \"9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9\" (UID: \"9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9\") " Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.468242 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9-trusted-ca-bundle\") pod \"9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9\" (UID: \"9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9\") " Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.468269 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9-console-serving-cert\") pod \"9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9\" (UID: \"9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9\") " Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.468825 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9" (UID: "9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.468968 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9-console-config" (OuterVolumeSpecName: "console-config") pod "9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9" (UID: "9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.469001 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9-service-ca" (OuterVolumeSpecName: "service-ca") pod "9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9" (UID: "9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.469371 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9" (UID: "9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.474015 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9-kube-api-access-wxdlz" (OuterVolumeSpecName: "kube-api-access-wxdlz") pod "9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9" (UID: "9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9"). InnerVolumeSpecName "kube-api-access-wxdlz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.474422 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9" (UID: "9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.474538 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9" (UID: "9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.569587 4725 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9-console-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.569627 4725 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9-console-oauth-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.569641 4725 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.569652 4725 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9-console-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.569664 4725 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.569677 4725 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9-service-ca\") on node \"crc\" DevicePath \"\"" Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.569688 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxdlz\" (UniqueName: \"kubernetes.io/projected/9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9-kube-api-access-wxdlz\") on node \"crc\" DevicePath \"\"" Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.724786 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6ssjz"] Oct 14 13:27:39 crc kubenswrapper[4725]: E1014 13:27:39.725275 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9" containerName="console" Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.725353 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9" containerName="console" Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.725584 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9" containerName="console" Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.726524 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6ssjz" Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.734785 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6ssjz"] Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.772150 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17-utilities\") pod \"redhat-operators-6ssjz\" (UID: \"e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17\") " pod="openshift-marketplace/redhat-operators-6ssjz" Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.772217 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17-catalog-content\") pod \"redhat-operators-6ssjz\" (UID: \"e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17\") " pod="openshift-marketplace/redhat-operators-6ssjz" Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.772256 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8jr2\" (UniqueName: \"kubernetes.io/projected/e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17-kube-api-access-q8jr2\") pod \"redhat-operators-6ssjz\" (UID: \"e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17\") " pod="openshift-marketplace/redhat-operators-6ssjz" Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.810016 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-dbshk_9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9/console/0.log" Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.810076 4725 generic.go:334] "Generic (PLEG): container finished" podID="9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9" containerID="7c0b62ef5c6aa141f16a2227f2a5f4dd9c305cc041cbc58c134e303c6b700b75" exitCode=2 Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.810113 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-dbshk" event={"ID":"9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9","Type":"ContainerDied","Data":"7c0b62ef5c6aa141f16a2227f2a5f4dd9c305cc041cbc58c134e303c6b700b75"} Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.810143 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-dbshk" event={"ID":"9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9","Type":"ContainerDied","Data":"f9c175e3e9d30619e7e0ce4c384cb7042becf31a458a23871c44d73b87245a25"} Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.810146 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-dbshk" Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.810196 4725 scope.go:117] "RemoveContainer" containerID="7c0b62ef5c6aa141f16a2227f2a5f4dd9c305cc041cbc58c134e303c6b700b75" Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.840521 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-dbshk"] Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.842279 4725 scope.go:117] "RemoveContainer" containerID="7c0b62ef5c6aa141f16a2227f2a5f4dd9c305cc041cbc58c134e303c6b700b75" Oct 14 13:27:39 crc kubenswrapper[4725]: E1014 13:27:39.842757 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c0b62ef5c6aa141f16a2227f2a5f4dd9c305cc041cbc58c134e303c6b700b75\": container with ID starting with 7c0b62ef5c6aa141f16a2227f2a5f4dd9c305cc041cbc58c134e303c6b700b75 not found: ID does not exist" containerID="7c0b62ef5c6aa141f16a2227f2a5f4dd9c305cc041cbc58c134e303c6b700b75" Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.842805 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c0b62ef5c6aa141f16a2227f2a5f4dd9c305cc041cbc58c134e303c6b700b75"} err="failed to get container status \"7c0b62ef5c6aa141f16a2227f2a5f4dd9c305cc041cbc58c134e303c6b700b75\": rpc error: code = NotFound desc = could not find container \"7c0b62ef5c6aa141f16a2227f2a5f4dd9c305cc041cbc58c134e303c6b700b75\": container with ID starting with 7c0b62ef5c6aa141f16a2227f2a5f4dd9c305cc041cbc58c134e303c6b700b75 not found: ID does not exist" Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.845516 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-dbshk"] Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.873834 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8jr2\" (UniqueName: \"kubernetes.io/projected/e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17-kube-api-access-q8jr2\") pod \"redhat-operators-6ssjz\" (UID: \"e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17\") " pod="openshift-marketplace/redhat-operators-6ssjz" Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.874189 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17-utilities\") pod \"redhat-operators-6ssjz\" (UID: \"e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17\") " pod="openshift-marketplace/redhat-operators-6ssjz" Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.874332 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17-catalog-content\") pod \"redhat-operators-6ssjz\" (UID: \"e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17\") " pod="openshift-marketplace/redhat-operators-6ssjz" Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.874674 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17-catalog-content\") pod \"redhat-operators-6ssjz\" (UID: \"e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17\") " pod="openshift-marketplace/redhat-operators-6ssjz" Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.874674 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17-utilities\") pod \"redhat-operators-6ssjz\" (UID: \"e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17\") " pod="openshift-marketplace/redhat-operators-6ssjz" Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.891930 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8jr2\" (UniqueName: \"kubernetes.io/projected/e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17-kube-api-access-q8jr2\") pod \"redhat-operators-6ssjz\" (UID: \"e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17\") " pod="openshift-marketplace/redhat-operators-6ssjz" Oct 14 13:27:39 crc kubenswrapper[4725]: I1014 13:27:39.929007 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9" path="/var/lib/kubelet/pods/9a73a0c0-fc79-46f3-a0c6-96b93e65a3e9/volumes" Oct 14 13:27:40 crc kubenswrapper[4725]: I1014 13:27:40.053223 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6ssjz" Oct 14 13:27:40 crc kubenswrapper[4725]: I1014 13:27:40.443651 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6ssjz"] Oct 14 13:27:40 crc kubenswrapper[4725]: W1014 13:27:40.452496 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3eac4d4_a5db_48de_9fb0_f9a5b9f74e17.slice/crio-fff28504f505ef4707ab07934fb9ae45cfcc0ea25f4575ae4a7a15dbb6918204 WatchSource:0}: Error finding container fff28504f505ef4707ab07934fb9ae45cfcc0ea25f4575ae4a7a15dbb6918204: Status 404 returned error can't find the container with id fff28504f505ef4707ab07934fb9ae45cfcc0ea25f4575ae4a7a15dbb6918204 Oct 14 13:27:40 crc kubenswrapper[4725]: I1014 13:27:40.816925 4725 generic.go:334] "Generic (PLEG): container finished" podID="e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17" containerID="5676605aff24671c026fa2ab2104437623d928a148cd4ffb185b50232a22ca5d" exitCode=0 Oct 14 13:27:40 crc kubenswrapper[4725]: I1014 13:27:40.816962 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ssjz" event={"ID":"e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17","Type":"ContainerDied","Data":"5676605aff24671c026fa2ab2104437623d928a148cd4ffb185b50232a22ca5d"} Oct 14 13:27:40 crc kubenswrapper[4725]: I1014 13:27:40.816985 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ssjz" event={"ID":"e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17","Type":"ContainerStarted","Data":"fff28504f505ef4707ab07934fb9ae45cfcc0ea25f4575ae4a7a15dbb6918204"} Oct 14 13:27:42 crc kubenswrapper[4725]: I1014 13:27:42.844282 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ssjz" event={"ID":"e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17","Type":"ContainerStarted","Data":"52c4afe0a54d4abb10aef00a79654ac9ab6273ccd9bcc80b0bae52d75098cb2a"} Oct 14 13:27:42 crc kubenswrapper[4725]: I1014 13:27:42.849075 4725 generic.go:334] "Generic (PLEG): container finished" podID="f11c88bf-9dff-4a1e-825d-bcaae865c70d" containerID="a7c281ebf9387a646f486f4e7fd90630cc5f822474d5b973df525bc8081c52d3" exitCode=0 Oct 14 13:27:42 crc kubenswrapper[4725]: I1014 13:27:42.849136 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bnmkg" 
event={"ID":"f11c88bf-9dff-4a1e-825d-bcaae865c70d","Type":"ContainerDied","Data":"a7c281ebf9387a646f486f4e7fd90630cc5f822474d5b973df525bc8081c52d3"} Oct 14 13:27:43 crc kubenswrapper[4725]: I1014 13:27:43.859494 4725 generic.go:334] "Generic (PLEG): container finished" podID="f11c88bf-9dff-4a1e-825d-bcaae865c70d" containerID="126df75ebfdea7a65aef8d31591bf48a8dffde267663a39511a8404450978c98" exitCode=0 Oct 14 13:27:43 crc kubenswrapper[4725]: I1014 13:27:43.859607 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bnmkg" event={"ID":"f11c88bf-9dff-4a1e-825d-bcaae865c70d","Type":"ContainerDied","Data":"126df75ebfdea7a65aef8d31591bf48a8dffde267663a39511a8404450978c98"} Oct 14 13:27:43 crc kubenswrapper[4725]: I1014 13:27:43.862303 4725 generic.go:334] "Generic (PLEG): container finished" podID="e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17" containerID="52c4afe0a54d4abb10aef00a79654ac9ab6273ccd9bcc80b0bae52d75098cb2a" exitCode=0 Oct 14 13:27:43 crc kubenswrapper[4725]: I1014 13:27:43.862509 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ssjz" event={"ID":"e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17","Type":"ContainerDied","Data":"52c4afe0a54d4abb10aef00a79654ac9ab6273ccd9bcc80b0bae52d75098cb2a"} Oct 14 13:27:44 crc kubenswrapper[4725]: I1014 13:27:44.871623 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ssjz" event={"ID":"e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17","Type":"ContainerStarted","Data":"7a91e1b942349f1f8eb5bee84b2f9795ba13b56b78734b2bb430b91c15512714"} Oct 14 13:27:44 crc kubenswrapper[4725]: I1014 13:27:44.898731 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6ssjz" podStartSLOduration=3.057200683 podStartE2EDuration="5.898710649s" podCreationTimestamp="2025-10-14 13:27:39 +0000 UTC" firstStartedPulling="2025-10-14 13:27:41.837658417 +0000 UTC m=+778.686093226" lastFinishedPulling="2025-10-14 13:27:44.679168343 +0000 UTC m=+781.527603192" observedRunningTime="2025-10-14 13:27:44.898111483 +0000 UTC m=+781.746546332" watchObservedRunningTime="2025-10-14 13:27:44.898710649 +0000 UTC m=+781.747145468" Oct 14 13:27:45 crc kubenswrapper[4725]: I1014 13:27:45.233611 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bnmkg" Oct 14 13:27:45 crc kubenswrapper[4725]: I1014 13:27:45.251126 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f11c88bf-9dff-4a1e-825d-bcaae865c70d-bundle\") pod \"f11c88bf-9dff-4a1e-825d-bcaae865c70d\" (UID: \"f11c88bf-9dff-4a1e-825d-bcaae865c70d\") " Oct 14 13:27:45 crc kubenswrapper[4725]: I1014 13:27:45.251200 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f11c88bf-9dff-4a1e-825d-bcaae865c70d-util\") pod \"f11c88bf-9dff-4a1e-825d-bcaae865c70d\" (UID: \"f11c88bf-9dff-4a1e-825d-bcaae865c70d\") " Oct 14 13:27:45 crc kubenswrapper[4725]: I1014 13:27:45.251301 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58n2c\" (UniqueName: \"kubernetes.io/projected/f11c88bf-9dff-4a1e-825d-bcaae865c70d-kube-api-access-58n2c\") pod \"f11c88bf-9dff-4a1e-825d-bcaae865c70d\" (UID: \"f11c88bf-9dff-4a1e-825d-bcaae865c70d\") " Oct 14 13:27:45 crc kubenswrapper[4725]: I1014 13:27:45.252323 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f11c88bf-9dff-4a1e-825d-bcaae865c70d-bundle" (OuterVolumeSpecName: "bundle") pod "f11c88bf-9dff-4a1e-825d-bcaae865c70d" (UID: "f11c88bf-9dff-4a1e-825d-bcaae865c70d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:27:45 crc kubenswrapper[4725]: I1014 13:27:45.259901 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f11c88bf-9dff-4a1e-825d-bcaae865c70d-kube-api-access-58n2c" (OuterVolumeSpecName: "kube-api-access-58n2c") pod "f11c88bf-9dff-4a1e-825d-bcaae865c70d" (UID: "f11c88bf-9dff-4a1e-825d-bcaae865c70d"). InnerVolumeSpecName "kube-api-access-58n2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:27:45 crc kubenswrapper[4725]: I1014 13:27:45.263048 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f11c88bf-9dff-4a1e-825d-bcaae865c70d-util" (OuterVolumeSpecName: "util") pod "f11c88bf-9dff-4a1e-825d-bcaae865c70d" (UID: "f11c88bf-9dff-4a1e-825d-bcaae865c70d"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:27:45 crc kubenswrapper[4725]: I1014 13:27:45.353403 4725 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f11c88bf-9dff-4a1e-825d-bcaae865c70d-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:27:45 crc kubenswrapper[4725]: I1014 13:27:45.353444 4725 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f11c88bf-9dff-4a1e-825d-bcaae865c70d-util\") on node \"crc\" DevicePath \"\"" Oct 14 13:27:45 crc kubenswrapper[4725]: I1014 13:27:45.353499 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58n2c\" (UniqueName: \"kubernetes.io/projected/f11c88bf-9dff-4a1e-825d-bcaae865c70d-kube-api-access-58n2c\") on node \"crc\" DevicePath \"\"" Oct 14 13:27:45 crc kubenswrapper[4725]: I1014 13:27:45.894765 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bnmkg" event={"ID":"f11c88bf-9dff-4a1e-825d-bcaae865c70d","Type":"ContainerDied","Data":"85fd4b344d8c5d7486ef471964f3b2c0374efd09319ae09a459ec6fee996962a"} Oct 14 13:27:45 crc kubenswrapper[4725]: I1014 13:27:45.896047 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85fd4b344d8c5d7486ef471964f3b2c0374efd09319ae09a459ec6fee996962a" Oct 14 13:27:45 crc kubenswrapper[4725]: I1014 13:27:45.894844 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bnmkg" Oct 14 13:27:50 crc kubenswrapper[4725]: I1014 13:27:50.054427 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6ssjz" Oct 14 13:27:50 crc kubenswrapper[4725]: I1014 13:27:50.054771 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6ssjz" Oct 14 13:27:50 crc kubenswrapper[4725]: I1014 13:27:50.094806 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6ssjz" Oct 14 13:27:50 crc kubenswrapper[4725]: I1014 13:27:50.964063 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6ssjz" Oct 14 13:27:53 crc kubenswrapper[4725]: I1014 13:27:53.310558 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6ssjz"] Oct 14 13:27:53 crc kubenswrapper[4725]: I1014 13:27:53.310775 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6ssjz" podUID="e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17" containerName="registry-server" containerID="cri-o://7a91e1b942349f1f8eb5bee84b2f9795ba13b56b78734b2bb430b91c15512714" gracePeriod=2 Oct 14 13:27:53 crc kubenswrapper[4725]: I1014 13:27:53.523285 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nrrl8"] Oct 14 13:27:53 crc kubenswrapper[4725]: E1014 13:27:53.523521 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f11c88bf-9dff-4a1e-825d-bcaae865c70d" containerName="pull" Oct 14 13:27:53 crc kubenswrapper[4725]: I1014 13:27:53.523534 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f11c88bf-9dff-4a1e-825d-bcaae865c70d" containerName="pull" Oct 14 13:27:53 crc kubenswrapper[4725]: E1014 13:27:53.523549 4725 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f11c88bf-9dff-4a1e-825d-bcaae865c70d" containerName="util" Oct 14 13:27:53 crc kubenswrapper[4725]: I1014 13:27:53.523569 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f11c88bf-9dff-4a1e-825d-bcaae865c70d" containerName="util" Oct 14 13:27:53 crc kubenswrapper[4725]: E1014 13:27:53.523577 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f11c88bf-9dff-4a1e-825d-bcaae865c70d" containerName="extract" Oct 14 13:27:53 crc kubenswrapper[4725]: I1014 13:27:53.523583 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f11c88bf-9dff-4a1e-825d-bcaae865c70d" containerName="extract" Oct 14 13:27:53 crc kubenswrapper[4725]: I1014 13:27:53.523679 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="f11c88bf-9dff-4a1e-825d-bcaae865c70d" containerName="extract" Oct 14 13:27:53 crc kubenswrapper[4725]: I1014 13:27:53.524414 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nrrl8" Oct 14 13:27:53 crc kubenswrapper[4725]: I1014 13:27:53.560782 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdr6w\" (UniqueName: \"kubernetes.io/projected/fde5463f-e7e3-4cba-b5c2-6822b774f682-kube-api-access-gdr6w\") pod \"redhat-marketplace-nrrl8\" (UID: \"fde5463f-e7e3-4cba-b5c2-6822b774f682\") " pod="openshift-marketplace/redhat-marketplace-nrrl8" Oct 14 13:27:53 crc kubenswrapper[4725]: I1014 13:27:53.560841 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fde5463f-e7e3-4cba-b5c2-6822b774f682-catalog-content\") pod \"redhat-marketplace-nrrl8\" (UID: \"fde5463f-e7e3-4cba-b5c2-6822b774f682\") " pod="openshift-marketplace/redhat-marketplace-nrrl8" Oct 14 13:27:53 crc kubenswrapper[4725]: I1014 13:27:53.560879 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fde5463f-e7e3-4cba-b5c2-6822b774f682-utilities\") pod \"redhat-marketplace-nrrl8\" (UID: \"fde5463f-e7e3-4cba-b5c2-6822b774f682\") " pod="openshift-marketplace/redhat-marketplace-nrrl8" Oct 14 13:27:53 crc kubenswrapper[4725]: I1014 13:27:53.569728 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nrrl8"] Oct 14 13:27:53 crc kubenswrapper[4725]: E1014 13:27:53.601544 4725 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3eac4d4_a5db_48de_9fb0_f9a5b9f74e17.slice/crio-7a91e1b942349f1f8eb5bee84b2f9795ba13b56b78734b2bb430b91c15512714.scope\": RecentStats: unable to find data in memory cache]" Oct 14 13:27:53 crc kubenswrapper[4725]: I1014 13:27:53.662515 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdr6w\" (UniqueName: \"kubernetes.io/projected/fde5463f-e7e3-4cba-b5c2-6822b774f682-kube-api-access-gdr6w\") pod \"redhat-marketplace-nrrl8\" (UID: \"fde5463f-e7e3-4cba-b5c2-6822b774f682\") " pod="openshift-marketplace/redhat-marketplace-nrrl8" Oct 14 13:27:53 crc kubenswrapper[4725]: I1014 13:27:53.662786 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/fde5463f-e7e3-4cba-b5c2-6822b774f682-catalog-content\") pod \"redhat-marketplace-nrrl8\" (UID: \"fde5463f-e7e3-4cba-b5c2-6822b774f682\") " pod="openshift-marketplace/redhat-marketplace-nrrl8" Oct 14 13:27:53 crc kubenswrapper[4725]: I1014 13:27:53.662817 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fde5463f-e7e3-4cba-b5c2-6822b774f682-utilities\") pod \"redhat-marketplace-nrrl8\" (UID: \"fde5463f-e7e3-4cba-b5c2-6822b774f682\") " pod="openshift-marketplace/redhat-marketplace-nrrl8" Oct 14 13:27:53 crc kubenswrapper[4725]: I1014 13:27:53.663378 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fde5463f-e7e3-4cba-b5c2-6822b774f682-utilities\") pod \"redhat-marketplace-nrrl8\" (UID: \"fde5463f-e7e3-4cba-b5c2-6822b774f682\") " pod="openshift-marketplace/redhat-marketplace-nrrl8" Oct 14 13:27:53 crc kubenswrapper[4725]: I1014 13:27:53.663469 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fde5463f-e7e3-4cba-b5c2-6822b774f682-catalog-content\") pod \"redhat-marketplace-nrrl8\" (UID: \"fde5463f-e7e3-4cba-b5c2-6822b774f682\") " pod="openshift-marketplace/redhat-marketplace-nrrl8" Oct 14 13:27:53 crc kubenswrapper[4725]: I1014 13:27:53.681123 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdr6w\" (UniqueName: \"kubernetes.io/projected/fde5463f-e7e3-4cba-b5c2-6822b774f682-kube-api-access-gdr6w\") pod \"redhat-marketplace-nrrl8\" (UID: \"fde5463f-e7e3-4cba-b5c2-6822b774f682\") " pod="openshift-marketplace/redhat-marketplace-nrrl8" Oct 14 13:27:53 crc kubenswrapper[4725]: I1014 13:27:53.846344 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nrrl8" Oct 14 13:27:53 crc kubenswrapper[4725]: I1014 13:27:53.943075 4725 generic.go:334] "Generic (PLEG): container finished" podID="e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17" containerID="7a91e1b942349f1f8eb5bee84b2f9795ba13b56b78734b2bb430b91c15512714" exitCode=0 Oct 14 13:27:53 crc kubenswrapper[4725]: I1014 13:27:53.943331 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ssjz" event={"ID":"e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17","Type":"ContainerDied","Data":"7a91e1b942349f1f8eb5bee84b2f9795ba13b56b78734b2bb430b91c15512714"} Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.397878 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nrrl8"] Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.436890 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6ssjz" Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.454357 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5c787f6f6d-k5ls2"] Oct 14 13:27:54 crc kubenswrapper[4725]: E1014 13:27:54.454662 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17" containerName="registry-server" Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.454676 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17" containerName="registry-server" Oct 14 13:27:54 crc kubenswrapper[4725]: E1014 13:27:54.454693 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17" containerName="extract-utilities" Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.454700 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17" containerName="extract-utilities" Oct 14 13:27:54 crc kubenswrapper[4725]: E1014 13:27:54.454711 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17" containerName="extract-content" Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.454718 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17" containerName="extract-content" Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.454839 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17" containerName="registry-server" Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.455206 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5c787f6f6d-k5ls2" Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.460081 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.466776 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.466935 4725 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-44qph" Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.466941 4725 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.470768 4725 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.471965 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17-utilities\") pod \"e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17\" (UID: \"e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17\") " Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.472027 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17-catalog-content\") pod \"e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17\" (UID: \"e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17\") " Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.472085 
4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8jr2\" (UniqueName: \"kubernetes.io/projected/e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17-kube-api-access-q8jr2\") pod \"e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17\" (UID: \"e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17\") " Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.472379 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2sls\" (UniqueName: \"kubernetes.io/projected/9f68d3de-1952-4351-9955-742c297861c5-kube-api-access-f2sls\") pod \"metallb-operator-controller-manager-5c787f6f6d-k5ls2\" (UID: \"9f68d3de-1952-4351-9955-742c297861c5\") " pod="metallb-system/metallb-operator-controller-manager-5c787f6f6d-k5ls2" Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.472415 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9f68d3de-1952-4351-9955-742c297861c5-webhook-cert\") pod \"metallb-operator-controller-manager-5c787f6f6d-k5ls2\" (UID: \"9f68d3de-1952-4351-9955-742c297861c5\") " pod="metallb-system/metallb-operator-controller-manager-5c787f6f6d-k5ls2" Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.472435 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9f68d3de-1952-4351-9955-742c297861c5-apiservice-cert\") pod \"metallb-operator-controller-manager-5c787f6f6d-k5ls2\" (UID: \"9f68d3de-1952-4351-9955-742c297861c5\") " pod="metallb-system/metallb-operator-controller-manager-5c787f6f6d-k5ls2" Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.473897 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17-utilities" (OuterVolumeSpecName: "utilities") pod "e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17" (UID: "e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.488877 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17-kube-api-access-q8jr2" (OuterVolumeSpecName: "kube-api-access-q8jr2") pod "e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17" (UID: "e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17"). InnerVolumeSpecName "kube-api-access-q8jr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.494288 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5c787f6f6d-k5ls2"] Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.571620 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17" (UID: "e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.578685 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2sls\" (UniqueName: \"kubernetes.io/projected/9f68d3de-1952-4351-9955-742c297861c5-kube-api-access-f2sls\") pod \"metallb-operator-controller-manager-5c787f6f6d-k5ls2\" (UID: \"9f68d3de-1952-4351-9955-742c297861c5\") " pod="metallb-system/metallb-operator-controller-manager-5c787f6f6d-k5ls2" Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.578735 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9f68d3de-1952-4351-9955-742c297861c5-webhook-cert\") pod \"metallb-operator-controller-manager-5c787f6f6d-k5ls2\" (UID: \"9f68d3de-1952-4351-9955-742c297861c5\") " pod="metallb-system/metallb-operator-controller-manager-5c787f6f6d-k5ls2" Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.578753 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9f68d3de-1952-4351-9955-742c297861c5-apiservice-cert\") pod \"metallb-operator-controller-manager-5c787f6f6d-k5ls2\" (UID: \"9f68d3de-1952-4351-9955-742c297861c5\") " pod="metallb-system/metallb-operator-controller-manager-5c787f6f6d-k5ls2" Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.578820 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.578832 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.578843 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8jr2\" (UniqueName: \"kubernetes.io/projected/e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17-kube-api-access-q8jr2\") on node \"crc\" DevicePath \"\"" Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.584810 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9f68d3de-1952-4351-9955-742c297861c5-apiservice-cert\") pod \"metallb-operator-controller-manager-5c787f6f6d-k5ls2\" (UID: \"9f68d3de-1952-4351-9955-742c297861c5\") " pod="metallb-system/metallb-operator-controller-manager-5c787f6f6d-k5ls2" Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.586414 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9f68d3de-1952-4351-9955-742c297861c5-webhook-cert\") pod \"metallb-operator-controller-manager-5c787f6f6d-k5ls2\" (UID: \"9f68d3de-1952-4351-9955-742c297861c5\") " pod="metallb-system/metallb-operator-controller-manager-5c787f6f6d-k5ls2" Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.607696 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2sls\" (UniqueName: \"kubernetes.io/projected/9f68d3de-1952-4351-9955-742c297861c5-kube-api-access-f2sls\") pod \"metallb-operator-controller-manager-5c787f6f6d-k5ls2\" (UID: \"9f68d3de-1952-4351-9955-742c297861c5\") " pod="metallb-system/metallb-operator-controller-manager-5c787f6f6d-k5ls2" Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 
13:27:54.793550 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5c787f6f6d-k5ls2" Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.877120 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-95c4f899b-qzjhp"] Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.877818 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-95c4f899b-qzjhp" Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.880348 4725 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.880525 4725 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.880633 4725 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-m6p4c" Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.891254 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-95c4f899b-qzjhp"] Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.954083 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ssjz" event={"ID":"e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17","Type":"ContainerDied","Data":"fff28504f505ef4707ab07934fb9ae45cfcc0ea25f4575ae4a7a15dbb6918204"} Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.954347 4725 scope.go:117] "RemoveContainer" containerID="7a91e1b942349f1f8eb5bee84b2f9795ba13b56b78734b2bb430b91c15512714" Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.954482 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6ssjz" Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.975712 4725 generic.go:334] "Generic (PLEG): container finished" podID="fde5463f-e7e3-4cba-b5c2-6822b774f682" containerID="a9f3da48d2f60765b9c0de1d6f6df434326413a1e982c1365a5a0b56269b1e36" exitCode=0 Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.975751 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nrrl8" event={"ID":"fde5463f-e7e3-4cba-b5c2-6822b774f682","Type":"ContainerDied","Data":"a9f3da48d2f60765b9c0de1d6f6df434326413a1e982c1365a5a0b56269b1e36"} Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.975777 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nrrl8" event={"ID":"fde5463f-e7e3-4cba-b5c2-6822b774f682","Type":"ContainerStarted","Data":"56e54acb51c1d135060b10267cf743772768cd93ef89689b59482d1616df3fbb"} Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.986249 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5c72d8f5-e412-465e-9f73-597f96b57392-apiservice-cert\") pod \"metallb-operator-webhook-server-95c4f899b-qzjhp\" (UID: \"5c72d8f5-e412-465e-9f73-597f96b57392\") " pod="metallb-system/metallb-operator-webhook-server-95c4f899b-qzjhp" Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.986327 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5c72d8f5-e412-465e-9f73-597f96b57392-webhook-cert\") pod \"metallb-operator-webhook-server-95c4f899b-qzjhp\" (UID: \"5c72d8f5-e412-465e-9f73-597f96b57392\") " pod="metallb-system/metallb-operator-webhook-server-95c4f899b-qzjhp" Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.986391 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcf4f\" (UniqueName: \"kubernetes.io/projected/5c72d8f5-e412-465e-9f73-597f96b57392-kube-api-access-jcf4f\") pod \"metallb-operator-webhook-server-95c4f899b-qzjhp\" (UID: \"5c72d8f5-e412-465e-9f73-597f96b57392\") " pod="metallb-system/metallb-operator-webhook-server-95c4f899b-qzjhp" Oct 14 13:27:54 crc kubenswrapper[4725]: I1014 13:27:54.993397 4725 scope.go:117] "RemoveContainer" containerID="52c4afe0a54d4abb10aef00a79654ac9ab6273ccd9bcc80b0bae52d75098cb2a" Oct 14 13:27:55 crc kubenswrapper[4725]: I1014 13:27:55.045121 4725 scope.go:117] "RemoveContainer" containerID="5676605aff24671c026fa2ab2104437623d928a148cd4ffb185b50232a22ca5d" Oct 14 13:27:55 crc kubenswrapper[4725]: I1014 13:27:55.088934 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5c72d8f5-e412-465e-9f73-597f96b57392-apiservice-cert\") pod \"metallb-operator-webhook-server-95c4f899b-qzjhp\" (UID: \"5c72d8f5-e412-465e-9f73-597f96b57392\") " pod="metallb-system/metallb-operator-webhook-server-95c4f899b-qzjhp" Oct 14 13:27:55 crc kubenswrapper[4725]: I1014 13:27:55.088990 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5c72d8f5-e412-465e-9f73-597f96b57392-webhook-cert\") pod \"metallb-operator-webhook-server-95c4f899b-qzjhp\" (UID: \"5c72d8f5-e412-465e-9f73-597f96b57392\") " pod="metallb-system/metallb-operator-webhook-server-95c4f899b-qzjhp" Oct 14 
13:27:55 crc kubenswrapper[4725]: I1014 13:27:55.089044 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcf4f\" (UniqueName: \"kubernetes.io/projected/5c72d8f5-e412-465e-9f73-597f96b57392-kube-api-access-jcf4f\") pod \"metallb-operator-webhook-server-95c4f899b-qzjhp\" (UID: \"5c72d8f5-e412-465e-9f73-597f96b57392\") " pod="metallb-system/metallb-operator-webhook-server-95c4f899b-qzjhp" Oct 14 13:27:55 crc kubenswrapper[4725]: I1014 13:27:55.097001 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6ssjz"] Oct 14 13:27:55 crc kubenswrapper[4725]: I1014 13:27:55.097050 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6ssjz"] Oct 14 13:27:55 crc kubenswrapper[4725]: I1014 13:27:55.104840 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5c72d8f5-e412-465e-9f73-597f96b57392-webhook-cert\") pod \"metallb-operator-webhook-server-95c4f899b-qzjhp\" (UID: \"5c72d8f5-e412-465e-9f73-597f96b57392\") " pod="metallb-system/metallb-operator-webhook-server-95c4f899b-qzjhp" Oct 14 13:27:55 crc kubenswrapper[4725]: I1014 13:27:55.116050 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5c72d8f5-e412-465e-9f73-597f96b57392-apiservice-cert\") pod \"metallb-operator-webhook-server-95c4f899b-qzjhp\" (UID: \"5c72d8f5-e412-465e-9f73-597f96b57392\") " pod="metallb-system/metallb-operator-webhook-server-95c4f899b-qzjhp" Oct 14 13:27:55 crc kubenswrapper[4725]: I1014 13:27:55.148280 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcf4f\" (UniqueName: \"kubernetes.io/projected/5c72d8f5-e412-465e-9f73-597f96b57392-kube-api-access-jcf4f\") pod \"metallb-operator-webhook-server-95c4f899b-qzjhp\" (UID: \"5c72d8f5-e412-465e-9f73-597f96b57392\") " pod="metallb-system/metallb-operator-webhook-server-95c4f899b-qzjhp" Oct 14 13:27:55 crc kubenswrapper[4725]: I1014 13:27:55.207746 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-95c4f899b-qzjhp" Oct 14 13:27:55 crc kubenswrapper[4725]: I1014 13:27:55.393010 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5c787f6f6d-k5ls2"] Oct 14 13:27:55 crc kubenswrapper[4725]: I1014 13:27:55.660761 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-95c4f899b-qzjhp"] Oct 14 13:27:55 crc kubenswrapper[4725]: W1014 13:27:55.668443 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c72d8f5_e412_465e_9f73_597f96b57392.slice/crio-dfe360c6f833e7f71899dbe48ff0bd309270f446006096a8c94646c20a683fd2 WatchSource:0}: Error finding container dfe360c6f833e7f71899dbe48ff0bd309270f446006096a8c94646c20a683fd2: Status 404 returned error can't find the container with id dfe360c6f833e7f71899dbe48ff0bd309270f446006096a8c94646c20a683fd2 Oct 14 13:27:55 crc kubenswrapper[4725]: I1014 13:27:55.927501 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17" path="/var/lib/kubelet/pods/e3eac4d4-a5db-48de-9fb0-f9a5b9f74e17/volumes" Oct 14 13:27:55 crc kubenswrapper[4725]: I1014 13:27:55.982431 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5c787f6f6d-k5ls2" event={"ID":"9f68d3de-1952-4351-9955-742c297861c5","Type":"ContainerStarted","Data":"d34ae5c21c5e52f7da076387aae9f7ed6971ab948c05f73e3fb8d70c6f608446"} Oct 14 13:27:55 crc kubenswrapper[4725]: I1014 13:27:55.986291 4725 generic.go:334] "Generic (PLEG): container finished" podID="fde5463f-e7e3-4cba-b5c2-6822b774f682" containerID="23a4deaacabe75304d0f420e61952778469ad9a6e0f4166ef56b2285f6dfdb40" exitCode=0 Oct 14 13:27:55 crc kubenswrapper[4725]: I1014 13:27:55.986385 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nrrl8" event={"ID":"fde5463f-e7e3-4cba-b5c2-6822b774f682","Type":"ContainerDied","Data":"23a4deaacabe75304d0f420e61952778469ad9a6e0f4166ef56b2285f6dfdb40"} Oct 14 13:27:55 crc kubenswrapper[4725]: I1014 13:27:55.991867 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-95c4f899b-qzjhp" event={"ID":"5c72d8f5-e412-465e-9f73-597f96b57392","Type":"ContainerStarted","Data":"dfe360c6f833e7f71899dbe48ff0bd309270f446006096a8c94646c20a683fd2"} Oct 14 13:27:57 crc kubenswrapper[4725]: I1014 13:27:56.999965 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nrrl8" event={"ID":"fde5463f-e7e3-4cba-b5c2-6822b774f682","Type":"ContainerStarted","Data":"0b37b6168fb917c7b16b6ea9d51d16b4fbbac2e74416b344cef144779ef013c7"} Oct 14 13:27:57 crc kubenswrapper[4725]: I1014 13:27:57.024886 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nrrl8" podStartSLOduration=2.486045176 podStartE2EDuration="4.024870678s" podCreationTimestamp="2025-10-14 13:27:53 +0000 UTC" firstStartedPulling="2025-10-14 13:27:54.979396127 +0000 UTC m=+791.827830936" lastFinishedPulling="2025-10-14 13:27:56.518221629 +0000 UTC m=+793.366656438" observedRunningTime="2025-10-14 13:27:57.022876693 +0000 UTC m=+793.871311502" watchObservedRunningTime="2025-10-14 13:27:57.024870678 +0000 UTC m=+793.873305487" Oct 14 13:28:01 crc kubenswrapper[4725]: I1014 13:28:01.021619 4725 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5c787f6f6d-k5ls2" event={"ID":"9f68d3de-1952-4351-9955-742c297861c5","Type":"ContainerStarted","Data":"52727db5c3b1deecc7b0816406953631f3e7b22a4a886c4e6f72f97f2985abde"} Oct 14 13:28:01 crc kubenswrapper[4725]: I1014 13:28:01.022492 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5c787f6f6d-k5ls2" Oct 14 13:28:01 crc kubenswrapper[4725]: I1014 13:28:01.023926 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-95c4f899b-qzjhp" event={"ID":"5c72d8f5-e412-465e-9f73-597f96b57392","Type":"ContainerStarted","Data":"bed95dfb06a20066c190dc37b2b0cda97c3e1987d12f9ee6c72bcfd1b4a47d8b"} Oct 14 13:28:01 crc kubenswrapper[4725]: I1014 13:28:01.024270 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-95c4f899b-qzjhp" Oct 14 13:28:01 crc kubenswrapper[4725]: I1014 13:28:01.051540 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5c787f6f6d-k5ls2" podStartSLOduration=2.392328001 podStartE2EDuration="7.051500493s" podCreationTimestamp="2025-10-14 13:27:54 +0000 UTC" firstStartedPulling="2025-10-14 13:27:55.406423911 +0000 UTC m=+792.254858720" lastFinishedPulling="2025-10-14 13:28:00.065596403 +0000 UTC m=+796.914031212" observedRunningTime="2025-10-14 13:28:01.044336258 +0000 UTC m=+797.892771127" watchObservedRunningTime="2025-10-14 13:28:01.051500493 +0000 UTC m=+797.899935372" Oct 14 13:28:01 crc kubenswrapper[4725]: I1014 13:28:01.074657 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-95c4f899b-qzjhp" podStartSLOduration=2.661185954 podStartE2EDuration="7.074629644s" podCreationTimestamp="2025-10-14 13:27:54 +0000 UTC" firstStartedPulling="2025-10-14 13:27:55.671140642 +0000 UTC m=+792.519575451" lastFinishedPulling="2025-10-14 13:28:00.084584332 +0000 UTC m=+796.933019141" observedRunningTime="2025-10-14 13:28:01.069850164 +0000 UTC m=+797.918285013" watchObservedRunningTime="2025-10-14 13:28:01.074629644 +0000 UTC m=+797.923064453" Oct 14 13:28:02 crc kubenswrapper[4725]: I1014 13:28:02.520901 4725 patch_prober.go:28] interesting pod/machine-config-daemon-t9hh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 13:28:02 crc kubenswrapper[4725]: I1014 13:28:02.520983 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 13:28:02 crc kubenswrapper[4725]: I1014 13:28:02.521056 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" Oct 14 13:28:02 crc kubenswrapper[4725]: I1014 13:28:02.521901 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8af3cd49f53b953cdb857f98f2a4e4ef4b83977a9a2d06c5f02fcc6cd95add47"} 
pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 13:28:02 crc kubenswrapper[4725]: I1014 13:28:02.521989 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" containerID="cri-o://8af3cd49f53b953cdb857f98f2a4e4ef4b83977a9a2d06c5f02fcc6cd95add47" gracePeriod=600 Oct 14 13:28:03 crc kubenswrapper[4725]: I1014 13:28:03.037890 4725 generic.go:334] "Generic (PLEG): container finished" podID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerID="8af3cd49f53b953cdb857f98f2a4e4ef4b83977a9a2d06c5f02fcc6cd95add47" exitCode=0 Oct 14 13:28:03 crc kubenswrapper[4725]: I1014 13:28:03.037958 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" event={"ID":"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c","Type":"ContainerDied","Data":"8af3cd49f53b953cdb857f98f2a4e4ef4b83977a9a2d06c5f02fcc6cd95add47"} Oct 14 13:28:03 crc kubenswrapper[4725]: I1014 13:28:03.038525 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" event={"ID":"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c","Type":"ContainerStarted","Data":"aafea91841c00d0b809adb7fadf158b888ea4f36dadb61e7933d5af2c309820b"} Oct 14 13:28:03 crc kubenswrapper[4725]: I1014 13:28:03.038555 4725 scope.go:117] "RemoveContainer" containerID="901725d1f013ca80344ace1de79ac7ae086e040091353bbd5f5c2d54a02abb43" Oct 14 13:28:03 crc kubenswrapper[4725]: I1014 13:28:03.849648 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nrrl8" Oct 14 13:28:03 crc kubenswrapper[4725]: I1014 13:28:03.849778 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nrrl8" Oct 14 13:28:03 crc kubenswrapper[4725]: I1014 13:28:03.886793 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nrrl8" Oct 14 13:28:04 crc kubenswrapper[4725]: I1014 13:28:04.095011 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nrrl8" Oct 14 13:28:04 crc kubenswrapper[4725]: I1014 13:28:04.140953 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nrrl8"] Oct 14 13:28:06 crc kubenswrapper[4725]: I1014 13:28:06.059498 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nrrl8" podUID="fde5463f-e7e3-4cba-b5c2-6822b774f682" containerName="registry-server" containerID="cri-o://0b37b6168fb917c7b16b6ea9d51d16b4fbbac2e74416b344cef144779ef013c7" gracePeriod=2 Oct 14 13:28:06 crc kubenswrapper[4725]: I1014 13:28:06.617508 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nrrl8" Oct 14 13:28:06 crc kubenswrapper[4725]: I1014 13:28:06.761388 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fde5463f-e7e3-4cba-b5c2-6822b774f682-catalog-content\") pod \"fde5463f-e7e3-4cba-b5c2-6822b774f682\" (UID: \"fde5463f-e7e3-4cba-b5c2-6822b774f682\") " Oct 14 13:28:06 crc kubenswrapper[4725]: I1014 13:28:06.761699 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fde5463f-e7e3-4cba-b5c2-6822b774f682-utilities\") pod \"fde5463f-e7e3-4cba-b5c2-6822b774f682\" (UID: \"fde5463f-e7e3-4cba-b5c2-6822b774f682\") " Oct 14 13:28:06 crc kubenswrapper[4725]: I1014 13:28:06.761790 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdr6w\" (UniqueName: \"kubernetes.io/projected/fde5463f-e7e3-4cba-b5c2-6822b774f682-kube-api-access-gdr6w\") pod \"fde5463f-e7e3-4cba-b5c2-6822b774f682\" (UID: \"fde5463f-e7e3-4cba-b5c2-6822b774f682\") " Oct 14 13:28:06 crc kubenswrapper[4725]: I1014 13:28:06.766515 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fde5463f-e7e3-4cba-b5c2-6822b774f682-kube-api-access-gdr6w" (OuterVolumeSpecName: "kube-api-access-gdr6w") pod "fde5463f-e7e3-4cba-b5c2-6822b774f682" (UID: "fde5463f-e7e3-4cba-b5c2-6822b774f682"). InnerVolumeSpecName "kube-api-access-gdr6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:28:06 crc kubenswrapper[4725]: I1014 13:28:06.766553 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fde5463f-e7e3-4cba-b5c2-6822b774f682-utilities" (OuterVolumeSpecName: "utilities") pod "fde5463f-e7e3-4cba-b5c2-6822b774f682" (UID: "fde5463f-e7e3-4cba-b5c2-6822b774f682"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:28:06 crc kubenswrapper[4725]: I1014 13:28:06.775000 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fde5463f-e7e3-4cba-b5c2-6822b774f682-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fde5463f-e7e3-4cba-b5c2-6822b774f682" (UID: "fde5463f-e7e3-4cba-b5c2-6822b774f682"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:28:06 crc kubenswrapper[4725]: I1014 13:28:06.862622 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdr6w\" (UniqueName: \"kubernetes.io/projected/fde5463f-e7e3-4cba-b5c2-6822b774f682-kube-api-access-gdr6w\") on node \"crc\" DevicePath \"\"" Oct 14 13:28:06 crc kubenswrapper[4725]: I1014 13:28:06.862656 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fde5463f-e7e3-4cba-b5c2-6822b774f682-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 13:28:06 crc kubenswrapper[4725]: I1014 13:28:06.862666 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fde5463f-e7e3-4cba-b5c2-6822b774f682-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 13:28:07 crc kubenswrapper[4725]: I1014 13:28:07.065487 4725 generic.go:334] "Generic (PLEG): container finished" podID="fde5463f-e7e3-4cba-b5c2-6822b774f682" containerID="0b37b6168fb917c7b16b6ea9d51d16b4fbbac2e74416b344cef144779ef013c7" exitCode=0 Oct 14 13:28:07 crc kubenswrapper[4725]: I1014 13:28:07.065524 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nrrl8" event={"ID":"fde5463f-e7e3-4cba-b5c2-6822b774f682","Type":"ContainerDied","Data":"0b37b6168fb917c7b16b6ea9d51d16b4fbbac2e74416b344cef144779ef013c7"} Oct 14 13:28:07 crc kubenswrapper[4725]: I1014 13:28:07.065548 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nrrl8" event={"ID":"fde5463f-e7e3-4cba-b5c2-6822b774f682","Type":"ContainerDied","Data":"56e54acb51c1d135060b10267cf743772768cd93ef89689b59482d1616df3fbb"} Oct 14 13:28:07 crc kubenswrapper[4725]: I1014 13:28:07.065562 4725 scope.go:117] "RemoveContainer" containerID="0b37b6168fb917c7b16b6ea9d51d16b4fbbac2e74416b344cef144779ef013c7" Oct 14 13:28:07 crc kubenswrapper[4725]: I1014 13:28:07.065651 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nrrl8" Oct 14 13:28:07 crc kubenswrapper[4725]: I1014 13:28:07.088202 4725 scope.go:117] "RemoveContainer" containerID="23a4deaacabe75304d0f420e61952778469ad9a6e0f4166ef56b2285f6dfdb40" Oct 14 13:28:07 crc kubenswrapper[4725]: I1014 13:28:07.094248 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nrrl8"] Oct 14 13:28:07 crc kubenswrapper[4725]: I1014 13:28:07.098327 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nrrl8"] Oct 14 13:28:07 crc kubenswrapper[4725]: I1014 13:28:07.118839 4725 scope.go:117] "RemoveContainer" containerID="a9f3da48d2f60765b9c0de1d6f6df434326413a1e982c1365a5a0b56269b1e36" Oct 14 13:28:07 crc kubenswrapper[4725]: I1014 13:28:07.132177 4725 scope.go:117] "RemoveContainer" containerID="0b37b6168fb917c7b16b6ea9d51d16b4fbbac2e74416b344cef144779ef013c7" Oct 14 13:28:07 crc kubenswrapper[4725]: E1014 13:28:07.132737 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b37b6168fb917c7b16b6ea9d51d16b4fbbac2e74416b344cef144779ef013c7\": container with ID starting with 0b37b6168fb917c7b16b6ea9d51d16b4fbbac2e74416b344cef144779ef013c7 not found: ID does not exist" containerID="0b37b6168fb917c7b16b6ea9d51d16b4fbbac2e74416b344cef144779ef013c7" Oct 14 13:28:07 crc kubenswrapper[4725]: I1014 13:28:07.132800 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b37b6168fb917c7b16b6ea9d51d16b4fbbac2e74416b344cef144779ef013c7"} err="failed to get container status \"0b37b6168fb917c7b16b6ea9d51d16b4fbbac2e74416b344cef144779ef013c7\": rpc error: code = NotFound desc = could not find container \"0b37b6168fb917c7b16b6ea9d51d16b4fbbac2e74416b344cef144779ef013c7\": container with ID starting with 0b37b6168fb917c7b16b6ea9d51d16b4fbbac2e74416b344cef144779ef013c7 not found: ID does not exist" Oct 14 13:28:07 crc kubenswrapper[4725]: I1014 13:28:07.132849 4725 scope.go:117] "RemoveContainer" containerID="23a4deaacabe75304d0f420e61952778469ad9a6e0f4166ef56b2285f6dfdb40" Oct 14 13:28:07 crc kubenswrapper[4725]: E1014 13:28:07.133533 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23a4deaacabe75304d0f420e61952778469ad9a6e0f4166ef56b2285f6dfdb40\": container with ID starting with 23a4deaacabe75304d0f420e61952778469ad9a6e0f4166ef56b2285f6dfdb40 not found: ID does not exist" containerID="23a4deaacabe75304d0f420e61952778469ad9a6e0f4166ef56b2285f6dfdb40" Oct 14 13:28:07 crc kubenswrapper[4725]: I1014 13:28:07.133565 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23a4deaacabe75304d0f420e61952778469ad9a6e0f4166ef56b2285f6dfdb40"} err="failed to get container status \"23a4deaacabe75304d0f420e61952778469ad9a6e0f4166ef56b2285f6dfdb40\": rpc error: code = NotFound desc = could not find container \"23a4deaacabe75304d0f420e61952778469ad9a6e0f4166ef56b2285f6dfdb40\": container with ID starting with 23a4deaacabe75304d0f420e61952778469ad9a6e0f4166ef56b2285f6dfdb40 not found: ID does not exist" Oct 14 13:28:07 crc kubenswrapper[4725]: I1014 13:28:07.133588 4725 scope.go:117] "RemoveContainer" containerID="a9f3da48d2f60765b9c0de1d6f6df434326413a1e982c1365a5a0b56269b1e36" Oct 14 13:28:07 crc kubenswrapper[4725]: E1014 13:28:07.133872 4725 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a9f3da48d2f60765b9c0de1d6f6df434326413a1e982c1365a5a0b56269b1e36\": container with ID starting with a9f3da48d2f60765b9c0de1d6f6df434326413a1e982c1365a5a0b56269b1e36 not found: ID does not exist" containerID="a9f3da48d2f60765b9c0de1d6f6df434326413a1e982c1365a5a0b56269b1e36" Oct 14 13:28:07 crc kubenswrapper[4725]: I1014 13:28:07.133912 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9f3da48d2f60765b9c0de1d6f6df434326413a1e982c1365a5a0b56269b1e36"} err="failed to get container status \"a9f3da48d2f60765b9c0de1d6f6df434326413a1e982c1365a5a0b56269b1e36\": rpc error: code = NotFound desc = could not find container \"a9f3da48d2f60765b9c0de1d6f6df434326413a1e982c1365a5a0b56269b1e36\": container with ID starting with a9f3da48d2f60765b9c0de1d6f6df434326413a1e982c1365a5a0b56269b1e36 not found: ID does not exist" Oct 14 13:28:07 crc kubenswrapper[4725]: I1014 13:28:07.927564 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fde5463f-e7e3-4cba-b5c2-6822b774f682" path="/var/lib/kubelet/pods/fde5463f-e7e3-4cba-b5c2-6822b774f682/volumes" Oct 14 13:28:15 crc kubenswrapper[4725]: I1014 13:28:15.213888 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-95c4f899b-qzjhp" Oct 14 13:28:26 crc kubenswrapper[4725]: I1014 13:28:26.326620 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cjj4t"] Oct 14 13:28:26 crc kubenswrapper[4725]: E1014 13:28:26.327599 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fde5463f-e7e3-4cba-b5c2-6822b774f682" containerName="registry-server" Oct 14 13:28:26 crc kubenswrapper[4725]: I1014 13:28:26.327625 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="fde5463f-e7e3-4cba-b5c2-6822b774f682" containerName="registry-server" Oct 14 13:28:26 crc kubenswrapper[4725]: E1014 13:28:26.327652 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fde5463f-e7e3-4cba-b5c2-6822b774f682" containerName="extract-content" Oct 14 13:28:26 crc kubenswrapper[4725]: I1014 13:28:26.327669 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="fde5463f-e7e3-4cba-b5c2-6822b774f682" containerName="extract-content" Oct 14 13:28:26 crc kubenswrapper[4725]: E1014 13:28:26.327695 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fde5463f-e7e3-4cba-b5c2-6822b774f682" containerName="extract-utilities" Oct 14 13:28:26 crc kubenswrapper[4725]: I1014 13:28:26.327711 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="fde5463f-e7e3-4cba-b5c2-6822b774f682" containerName="extract-utilities" Oct 14 13:28:26 crc kubenswrapper[4725]: I1014 13:28:26.327964 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="fde5463f-e7e3-4cba-b5c2-6822b774f682" containerName="registry-server" Oct 14 13:28:26 crc kubenswrapper[4725]: I1014 13:28:26.329383 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cjj4t" Oct 14 13:28:26 crc kubenswrapper[4725]: I1014 13:28:26.343381 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cjj4t"] Oct 14 13:28:26 crc kubenswrapper[4725]: I1014 13:28:26.470237 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6b76e82-b82a-4e08-9d33-efb2d203fe56-catalog-content\") pod \"community-operators-cjj4t\" (UID: \"b6b76e82-b82a-4e08-9d33-efb2d203fe56\") " pod="openshift-marketplace/community-operators-cjj4t" Oct 14 13:28:26 crc kubenswrapper[4725]: I1014 13:28:26.470309 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6b76e82-b82a-4e08-9d33-efb2d203fe56-utilities\") pod \"community-operators-cjj4t\" (UID: \"b6b76e82-b82a-4e08-9d33-efb2d203fe56\") " pod="openshift-marketplace/community-operators-cjj4t" Oct 14 13:28:26 crc kubenswrapper[4725]: I1014 13:28:26.470339 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fv5k\" (UniqueName: \"kubernetes.io/projected/b6b76e82-b82a-4e08-9d33-efb2d203fe56-kube-api-access-5fv5k\") pod \"community-operators-cjj4t\" (UID: \"b6b76e82-b82a-4e08-9d33-efb2d203fe56\") " pod="openshift-marketplace/community-operators-cjj4t" Oct 14 13:28:26 crc kubenswrapper[4725]: I1014 13:28:26.572028 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6b76e82-b82a-4e08-9d33-efb2d203fe56-catalog-content\") pod \"community-operators-cjj4t\" (UID: \"b6b76e82-b82a-4e08-9d33-efb2d203fe56\") " pod="openshift-marketplace/community-operators-cjj4t" Oct 14 13:28:26 crc kubenswrapper[4725]: I1014 13:28:26.572098 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6b76e82-b82a-4e08-9d33-efb2d203fe56-utilities\") pod \"community-operators-cjj4t\" (UID: \"b6b76e82-b82a-4e08-9d33-efb2d203fe56\") " pod="openshift-marketplace/community-operators-cjj4t" Oct 14 13:28:26 crc kubenswrapper[4725]: I1014 13:28:26.572134 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fv5k\" (UniqueName: \"kubernetes.io/projected/b6b76e82-b82a-4e08-9d33-efb2d203fe56-kube-api-access-5fv5k\") pod \"community-operators-cjj4t\" (UID: \"b6b76e82-b82a-4e08-9d33-efb2d203fe56\") " pod="openshift-marketplace/community-operators-cjj4t" Oct 14 13:28:26 crc kubenswrapper[4725]: I1014 13:28:26.572571 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6b76e82-b82a-4e08-9d33-efb2d203fe56-catalog-content\") pod \"community-operators-cjj4t\" (UID: \"b6b76e82-b82a-4e08-9d33-efb2d203fe56\") " pod="openshift-marketplace/community-operators-cjj4t" Oct 14 13:28:26 crc kubenswrapper[4725]: I1014 13:28:26.572643 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6b76e82-b82a-4e08-9d33-efb2d203fe56-utilities\") pod \"community-operators-cjj4t\" (UID: \"b6b76e82-b82a-4e08-9d33-efb2d203fe56\") " pod="openshift-marketplace/community-operators-cjj4t" Oct 14 13:28:26 crc kubenswrapper[4725]: I1014 13:28:26.590373 4725 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5fv5k\" (UniqueName: \"kubernetes.io/projected/b6b76e82-b82a-4e08-9d33-efb2d203fe56-kube-api-access-5fv5k\") pod \"community-operators-cjj4t\" (UID: \"b6b76e82-b82a-4e08-9d33-efb2d203fe56\") " pod="openshift-marketplace/community-operators-cjj4t" Oct 14 13:28:26 crc kubenswrapper[4725]: I1014 13:28:26.658678 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cjj4t" Oct 14 13:28:27 crc kubenswrapper[4725]: I1014 13:28:27.160273 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cjj4t"] Oct 14 13:28:27 crc kubenswrapper[4725]: I1014 13:28:27.187684 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cjj4t" event={"ID":"b6b76e82-b82a-4e08-9d33-efb2d203fe56","Type":"ContainerStarted","Data":"c37bd952314c2748920c80822e869e51ae36bf94dcf9df0b6d6e32a21967651b"} Oct 14 13:28:28 crc kubenswrapper[4725]: I1014 13:28:28.199292 4725 generic.go:334] "Generic (PLEG): container finished" podID="b6b76e82-b82a-4e08-9d33-efb2d203fe56" containerID="9695a46280f096ad78bc76987bd103313714b5b7fb242667eba51b3f3829416a" exitCode=0 Oct 14 13:28:28 crc kubenswrapper[4725]: I1014 13:28:28.199333 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cjj4t" event={"ID":"b6b76e82-b82a-4e08-9d33-efb2d203fe56","Type":"ContainerDied","Data":"9695a46280f096ad78bc76987bd103313714b5b7fb242667eba51b3f3829416a"} Oct 14 13:28:30 crc kubenswrapper[4725]: I1014 13:28:30.216886 4725 generic.go:334] "Generic (PLEG): container finished" podID="b6b76e82-b82a-4e08-9d33-efb2d203fe56" containerID="4c51ef184cf43ba1ebb3a421d74c6a4d6f55df26d263d489e0f978e3fa4d6f81" exitCode=0 Oct 14 13:28:30 crc kubenswrapper[4725]: I1014 13:28:30.216939 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cjj4t" event={"ID":"b6b76e82-b82a-4e08-9d33-efb2d203fe56","Type":"ContainerDied","Data":"4c51ef184cf43ba1ebb3a421d74c6a4d6f55df26d263d489e0f978e3fa4d6f81"} Oct 14 13:28:31 crc kubenswrapper[4725]: I1014 13:28:31.229251 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cjj4t" event={"ID":"b6b76e82-b82a-4e08-9d33-efb2d203fe56","Type":"ContainerStarted","Data":"786c3e51479c2fdfe964a5b04a691db0ad7d9b0a3b84610969f53d1ee51003ce"} Oct 14 13:28:31 crc kubenswrapper[4725]: I1014 13:28:31.259566 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cjj4t" podStartSLOduration=2.84842789 podStartE2EDuration="5.259544499s" podCreationTimestamp="2025-10-14 13:28:26 +0000 UTC" firstStartedPulling="2025-10-14 13:28:28.201932162 +0000 UTC m=+825.050367001" lastFinishedPulling="2025-10-14 13:28:30.613048801 +0000 UTC m=+827.461483610" observedRunningTime="2025-10-14 13:28:31.257113883 +0000 UTC m=+828.105548712" watchObservedRunningTime="2025-10-14 13:28:31.259544499 +0000 UTC m=+828.107979318" Oct 14 13:28:34 crc kubenswrapper[4725]: I1014 13:28:34.796703 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5c787f6f6d-k5ls2" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.620343 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-pmr8r"] Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.633597 4725 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-pmr8r" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.637706 4725 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-n5n2g" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.641052 4725 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.641287 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.643298 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-7xnfb"] Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.644067 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-7xnfb" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.649134 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-7xnfb"] Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.651668 4725 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.711102 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/0aaf5c0e-3673-4bfa-a046-feed6a0121d7-reloader\") pod \"frr-k8s-pmr8r\" (UID: \"0aaf5c0e-3673-4bfa-a046-feed6a0121d7\") " pod="metallb-system/frr-k8s-pmr8r" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.711153 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0aaf5c0e-3673-4bfa-a046-feed6a0121d7-metrics-certs\") pod \"frr-k8s-pmr8r\" (UID: \"0aaf5c0e-3673-4bfa-a046-feed6a0121d7\") " pod="metallb-system/frr-k8s-pmr8r" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.711171 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0aaf5c0e-3673-4bfa-a046-feed6a0121d7-frr-startup\") pod \"frr-k8s-pmr8r\" (UID: \"0aaf5c0e-3673-4bfa-a046-feed6a0121d7\") " pod="metallb-system/frr-k8s-pmr8r" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.711188 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0aaf5c0e-3673-4bfa-a046-feed6a0121d7-metrics\") pod \"frr-k8s-pmr8r\" (UID: \"0aaf5c0e-3673-4bfa-a046-feed6a0121d7\") " pod="metallb-system/frr-k8s-pmr8r" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.711206 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0aaf5c0e-3673-4bfa-a046-feed6a0121d7-frr-conf\") pod \"frr-k8s-pmr8r\" (UID: \"0aaf5c0e-3673-4bfa-a046-feed6a0121d7\") " pod="metallb-system/frr-k8s-pmr8r" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.711200 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-2dxqc"] Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.711464 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfkwm\" (UniqueName: 
\"kubernetes.io/projected/0aaf5c0e-3673-4bfa-a046-feed6a0121d7-kube-api-access-tfkwm\") pod \"frr-k8s-pmr8r\" (UID: \"0aaf5c0e-3673-4bfa-a046-feed6a0121d7\") " pod="metallb-system/frr-k8s-pmr8r" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.711530 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0aaf5c0e-3673-4bfa-a046-feed6a0121d7-frr-sockets\") pod \"frr-k8s-pmr8r\" (UID: \"0aaf5c0e-3673-4bfa-a046-feed6a0121d7\") " pod="metallb-system/frr-k8s-pmr8r" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.712251 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-2dxqc" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.719678 4725 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.720058 4725 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-ldpfs" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.720242 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.726881 4725 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.732305 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-68d546b9d8-cs2cd"] Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.733411 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-68d546b9d8-cs2cd" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.735488 4725 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.756825 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-68d546b9d8-cs2cd"] Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.813079 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jhgw\" (UniqueName: \"kubernetes.io/projected/9c551c0f-3df3-4ba6-8bbb-4d996cad9d45-kube-api-access-6jhgw\") pod \"frr-k8s-webhook-server-64bf5d555-7xnfb\" (UID: \"9c551c0f-3df3-4ba6-8bbb-4d996cad9d45\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-7xnfb" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.813128 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/0aaf5c0e-3673-4bfa-a046-feed6a0121d7-reloader\") pod \"frr-k8s-pmr8r\" (UID: \"0aaf5c0e-3673-4bfa-a046-feed6a0121d7\") " pod="metallb-system/frr-k8s-pmr8r" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.813146 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8f68d749-82ff-45ee-b658-2324015012f7-metallb-excludel2\") pod \"speaker-2dxqc\" (UID: \"8f68d749-82ff-45ee-b658-2324015012f7\") " pod="metallb-system/speaker-2dxqc" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.813168 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/0aaf5c0e-3673-4bfa-a046-feed6a0121d7-metrics-certs\") pod \"frr-k8s-pmr8r\" (UID: \"0aaf5c0e-3673-4bfa-a046-feed6a0121d7\") " pod="metallb-system/frr-k8s-pmr8r" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.813188 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0aaf5c0e-3673-4bfa-a046-feed6a0121d7-frr-startup\") pod \"frr-k8s-pmr8r\" (UID: \"0aaf5c0e-3673-4bfa-a046-feed6a0121d7\") " pod="metallb-system/frr-k8s-pmr8r" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.813204 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0aaf5c0e-3673-4bfa-a046-feed6a0121d7-metrics\") pod \"frr-k8s-pmr8r\" (UID: \"0aaf5c0e-3673-4bfa-a046-feed6a0121d7\") " pod="metallb-system/frr-k8s-pmr8r" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.813224 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4jf5\" (UniqueName: \"kubernetes.io/projected/8f68d749-82ff-45ee-b658-2324015012f7-kube-api-access-c4jf5\") pod \"speaker-2dxqc\" (UID: \"8f68d749-82ff-45ee-b658-2324015012f7\") " pod="metallb-system/speaker-2dxqc" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.813240 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0aaf5c0e-3673-4bfa-a046-feed6a0121d7-frr-conf\") pod \"frr-k8s-pmr8r\" (UID: \"0aaf5c0e-3673-4bfa-a046-feed6a0121d7\") " pod="metallb-system/frr-k8s-pmr8r" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.813264 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9c551c0f-3df3-4ba6-8bbb-4d996cad9d45-cert\") pod \"frr-k8s-webhook-server-64bf5d555-7xnfb\" (UID: \"9c551c0f-3df3-4ba6-8bbb-4d996cad9d45\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-7xnfb" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.813285 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8f68d749-82ff-45ee-b658-2324015012f7-memberlist\") pod \"speaker-2dxqc\" (UID: \"8f68d749-82ff-45ee-b658-2324015012f7\") " pod="metallb-system/speaker-2dxqc" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.813306 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfkwm\" (UniqueName: \"kubernetes.io/projected/0aaf5c0e-3673-4bfa-a046-feed6a0121d7-kube-api-access-tfkwm\") pod \"frr-k8s-pmr8r\" (UID: \"0aaf5c0e-3673-4bfa-a046-feed6a0121d7\") " pod="metallb-system/frr-k8s-pmr8r" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.813322 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwb5g\" (UniqueName: \"kubernetes.io/projected/bdee3f99-134e-4020-b9a6-fdc4c66081eb-kube-api-access-wwb5g\") pod \"controller-68d546b9d8-cs2cd\" (UID: \"bdee3f99-134e-4020-b9a6-fdc4c66081eb\") " pod="metallb-system/controller-68d546b9d8-cs2cd" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.813340 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0aaf5c0e-3673-4bfa-a046-feed6a0121d7-frr-sockets\") pod \"frr-k8s-pmr8r\" (UID: \"0aaf5c0e-3673-4bfa-a046-feed6a0121d7\") " 
pod="metallb-system/frr-k8s-pmr8r" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.813357 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bdee3f99-134e-4020-b9a6-fdc4c66081eb-metrics-certs\") pod \"controller-68d546b9d8-cs2cd\" (UID: \"bdee3f99-134e-4020-b9a6-fdc4c66081eb\") " pod="metallb-system/controller-68d546b9d8-cs2cd" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.813386 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8f68d749-82ff-45ee-b658-2324015012f7-metrics-certs\") pod \"speaker-2dxqc\" (UID: \"8f68d749-82ff-45ee-b658-2324015012f7\") " pod="metallb-system/speaker-2dxqc" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.813421 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bdee3f99-134e-4020-b9a6-fdc4c66081eb-cert\") pod \"controller-68d546b9d8-cs2cd\" (UID: \"bdee3f99-134e-4020-b9a6-fdc4c66081eb\") " pod="metallb-system/controller-68d546b9d8-cs2cd" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.813821 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/0aaf5c0e-3673-4bfa-a046-feed6a0121d7-reloader\") pod \"frr-k8s-pmr8r\" (UID: \"0aaf5c0e-3673-4bfa-a046-feed6a0121d7\") " pod="metallb-system/frr-k8s-pmr8r" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.814061 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0aaf5c0e-3673-4bfa-a046-feed6a0121d7-frr-conf\") pod \"frr-k8s-pmr8r\" (UID: \"0aaf5c0e-3673-4bfa-a046-feed6a0121d7\") " pod="metallb-system/frr-k8s-pmr8r" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.814140 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0aaf5c0e-3673-4bfa-a046-feed6a0121d7-frr-sockets\") pod \"frr-k8s-pmr8r\" (UID: \"0aaf5c0e-3673-4bfa-a046-feed6a0121d7\") " pod="metallb-system/frr-k8s-pmr8r" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.814283 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0aaf5c0e-3673-4bfa-a046-feed6a0121d7-metrics\") pod \"frr-k8s-pmr8r\" (UID: \"0aaf5c0e-3673-4bfa-a046-feed6a0121d7\") " pod="metallb-system/frr-k8s-pmr8r" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.814824 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0aaf5c0e-3673-4bfa-a046-feed6a0121d7-frr-startup\") pod \"frr-k8s-pmr8r\" (UID: \"0aaf5c0e-3673-4bfa-a046-feed6a0121d7\") " pod="metallb-system/frr-k8s-pmr8r" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.825593 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0aaf5c0e-3673-4bfa-a046-feed6a0121d7-metrics-certs\") pod \"frr-k8s-pmr8r\" (UID: \"0aaf5c0e-3673-4bfa-a046-feed6a0121d7\") " pod="metallb-system/frr-k8s-pmr8r" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.829848 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfkwm\" (UniqueName: \"kubernetes.io/projected/0aaf5c0e-3673-4bfa-a046-feed6a0121d7-kube-api-access-tfkwm\") pod 
\"frr-k8s-pmr8r\" (UID: \"0aaf5c0e-3673-4bfa-a046-feed6a0121d7\") " pod="metallb-system/frr-k8s-pmr8r" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.914568 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9c551c0f-3df3-4ba6-8bbb-4d996cad9d45-cert\") pod \"frr-k8s-webhook-server-64bf5d555-7xnfb\" (UID: \"9c551c0f-3df3-4ba6-8bbb-4d996cad9d45\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-7xnfb" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.914626 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8f68d749-82ff-45ee-b658-2324015012f7-memberlist\") pod \"speaker-2dxqc\" (UID: \"8f68d749-82ff-45ee-b658-2324015012f7\") " pod="metallb-system/speaker-2dxqc" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.914652 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwb5g\" (UniqueName: \"kubernetes.io/projected/bdee3f99-134e-4020-b9a6-fdc4c66081eb-kube-api-access-wwb5g\") pod \"controller-68d546b9d8-cs2cd\" (UID: \"bdee3f99-134e-4020-b9a6-fdc4c66081eb\") " pod="metallb-system/controller-68d546b9d8-cs2cd" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.914676 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bdee3f99-134e-4020-b9a6-fdc4c66081eb-metrics-certs\") pod \"controller-68d546b9d8-cs2cd\" (UID: \"bdee3f99-134e-4020-b9a6-fdc4c66081eb\") " pod="metallb-system/controller-68d546b9d8-cs2cd" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.914708 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8f68d749-82ff-45ee-b658-2324015012f7-metrics-certs\") pod \"speaker-2dxqc\" (UID: \"8f68d749-82ff-45ee-b658-2324015012f7\") " pod="metallb-system/speaker-2dxqc" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.914741 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bdee3f99-134e-4020-b9a6-fdc4c66081eb-cert\") pod \"controller-68d546b9d8-cs2cd\" (UID: \"bdee3f99-134e-4020-b9a6-fdc4c66081eb\") " pod="metallb-system/controller-68d546b9d8-cs2cd" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.914759 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jhgw\" (UniqueName: \"kubernetes.io/projected/9c551c0f-3df3-4ba6-8bbb-4d996cad9d45-kube-api-access-6jhgw\") pod \"frr-k8s-webhook-server-64bf5d555-7xnfb\" (UID: \"9c551c0f-3df3-4ba6-8bbb-4d996cad9d45\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-7xnfb" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.914779 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8f68d749-82ff-45ee-b658-2324015012f7-metallb-excludel2\") pod \"speaker-2dxqc\" (UID: \"8f68d749-82ff-45ee-b658-2324015012f7\") " pod="metallb-system/speaker-2dxqc" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.914803 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4jf5\" (UniqueName: \"kubernetes.io/projected/8f68d749-82ff-45ee-b658-2324015012f7-kube-api-access-c4jf5\") pod \"speaker-2dxqc\" (UID: \"8f68d749-82ff-45ee-b658-2324015012f7\") " pod="metallb-system/speaker-2dxqc" Oct 14 
13:28:35 crc kubenswrapper[4725]: E1014 13:28:35.915825 4725 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Oct 14 13:28:35 crc kubenswrapper[4725]: E1014 13:28:35.915885 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bdee3f99-134e-4020-b9a6-fdc4c66081eb-metrics-certs podName:bdee3f99-134e-4020-b9a6-fdc4c66081eb nodeName:}" failed. No retries permitted until 2025-10-14 13:28:36.415865134 +0000 UTC m=+833.264299943 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bdee3f99-134e-4020-b9a6-fdc4c66081eb-metrics-certs") pod "controller-68d546b9d8-cs2cd" (UID: "bdee3f99-134e-4020-b9a6-fdc4c66081eb") : secret "controller-certs-secret" not found Oct 14 13:28:35 crc kubenswrapper[4725]: E1014 13:28:35.915938 4725 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Oct 14 13:28:35 crc kubenswrapper[4725]: E1014 13:28:35.916014 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8f68d749-82ff-45ee-b658-2324015012f7-memberlist podName:8f68d749-82ff-45ee-b658-2324015012f7 nodeName:}" failed. No retries permitted until 2025-10-14 13:28:36.415991478 +0000 UTC m=+833.264426347 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/8f68d749-82ff-45ee-b658-2324015012f7-memberlist") pod "speaker-2dxqc" (UID: "8f68d749-82ff-45ee-b658-2324015012f7") : secret "metallb-memberlist" not found Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.916563 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8f68d749-82ff-45ee-b658-2324015012f7-metallb-excludel2\") pod \"speaker-2dxqc\" (UID: \"8f68d749-82ff-45ee-b658-2324015012f7\") " pod="metallb-system/speaker-2dxqc" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.919515 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9c551c0f-3df3-4ba6-8bbb-4d996cad9d45-cert\") pod \"frr-k8s-webhook-server-64bf5d555-7xnfb\" (UID: \"9c551c0f-3df3-4ba6-8bbb-4d996cad9d45\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-7xnfb" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.919912 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8f68d749-82ff-45ee-b658-2324015012f7-metrics-certs\") pod \"speaker-2dxqc\" (UID: \"8f68d749-82ff-45ee-b658-2324015012f7\") " pod="metallb-system/speaker-2dxqc" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.920581 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bdee3f99-134e-4020-b9a6-fdc4c66081eb-cert\") pod \"controller-68d546b9d8-cs2cd\" (UID: \"bdee3f99-134e-4020-b9a6-fdc4c66081eb\") " pod="metallb-system/controller-68d546b9d8-cs2cd" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.934830 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwb5g\" (UniqueName: \"kubernetes.io/projected/bdee3f99-134e-4020-b9a6-fdc4c66081eb-kube-api-access-wwb5g\") pod \"controller-68d546b9d8-cs2cd\" (UID: \"bdee3f99-134e-4020-b9a6-fdc4c66081eb\") " pod="metallb-system/controller-68d546b9d8-cs2cd" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.937990 4725 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jhgw\" (UniqueName: \"kubernetes.io/projected/9c551c0f-3df3-4ba6-8bbb-4d996cad9d45-kube-api-access-6jhgw\") pod \"frr-k8s-webhook-server-64bf5d555-7xnfb\" (UID: \"9c551c0f-3df3-4ba6-8bbb-4d996cad9d45\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-7xnfb" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.939798 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4jf5\" (UniqueName: \"kubernetes.io/projected/8f68d749-82ff-45ee-b658-2324015012f7-kube-api-access-c4jf5\") pod \"speaker-2dxqc\" (UID: \"8f68d749-82ff-45ee-b658-2324015012f7\") " pod="metallb-system/speaker-2dxqc" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.970829 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-pmr8r" Oct 14 13:28:35 crc kubenswrapper[4725]: I1014 13:28:35.979185 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-7xnfb" Oct 14 13:28:36 crc kubenswrapper[4725]: I1014 13:28:36.260450 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pmr8r" event={"ID":"0aaf5c0e-3673-4bfa-a046-feed6a0121d7","Type":"ContainerStarted","Data":"91e4da9fa2977e9fdedbf1f9999e33790d918a064636307f08d6f479597d41fb"} Oct 14 13:28:36 crc kubenswrapper[4725]: I1014 13:28:36.422898 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-7xnfb"] Oct 14 13:28:36 crc kubenswrapper[4725]: I1014 13:28:36.425717 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8f68d749-82ff-45ee-b658-2324015012f7-memberlist\") pod \"speaker-2dxqc\" (UID: \"8f68d749-82ff-45ee-b658-2324015012f7\") " pod="metallb-system/speaker-2dxqc" Oct 14 13:28:36 crc kubenswrapper[4725]: I1014 13:28:36.425975 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bdee3f99-134e-4020-b9a6-fdc4c66081eb-metrics-certs\") pod \"controller-68d546b9d8-cs2cd\" (UID: \"bdee3f99-134e-4020-b9a6-fdc4c66081eb\") " pod="metallb-system/controller-68d546b9d8-cs2cd" Oct 14 13:28:36 crc kubenswrapper[4725]: E1014 13:28:36.425913 4725 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Oct 14 13:28:36 crc kubenswrapper[4725]: E1014 13:28:36.426096 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8f68d749-82ff-45ee-b658-2324015012f7-memberlist podName:8f68d749-82ff-45ee-b658-2324015012f7 nodeName:}" failed. No retries permitted until 2025-10-14 13:28:37.42607453 +0000 UTC m=+834.274509419 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/8f68d749-82ff-45ee-b658-2324015012f7-memberlist") pod "speaker-2dxqc" (UID: "8f68d749-82ff-45ee-b658-2324015012f7") : secret "metallb-memberlist" not found Oct 14 13:28:36 crc kubenswrapper[4725]: I1014 13:28:36.432732 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bdee3f99-134e-4020-b9a6-fdc4c66081eb-metrics-certs\") pod \"controller-68d546b9d8-cs2cd\" (UID: \"bdee3f99-134e-4020-b9a6-fdc4c66081eb\") " pod="metallb-system/controller-68d546b9d8-cs2cd" Oct 14 13:28:36 crc kubenswrapper[4725]: I1014 13:28:36.659282 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cjj4t" Oct 14 13:28:36 crc kubenswrapper[4725]: I1014 13:28:36.659347 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cjj4t" Oct 14 13:28:36 crc kubenswrapper[4725]: I1014 13:28:36.661442 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-68d546b9d8-cs2cd" Oct 14 13:28:36 crc kubenswrapper[4725]: I1014 13:28:36.706644 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cjj4t" Oct 14 13:28:36 crc kubenswrapper[4725]: I1014 13:28:36.877495 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-68d546b9d8-cs2cd"] Oct 14 13:28:36 crc kubenswrapper[4725]: W1014 13:28:36.901662 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbdee3f99_134e_4020_b9a6_fdc4c66081eb.slice/crio-ea609c80449d1de53da2e332652f8ba342a746d4d09366e16dafef94238b40dc WatchSource:0}: Error finding container ea609c80449d1de53da2e332652f8ba342a746d4d09366e16dafef94238b40dc: Status 404 returned error can't find the container with id ea609c80449d1de53da2e332652f8ba342a746d4d09366e16dafef94238b40dc Oct 14 13:28:37 crc kubenswrapper[4725]: I1014 13:28:37.268084 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-cs2cd" event={"ID":"bdee3f99-134e-4020-b9a6-fdc4c66081eb","Type":"ContainerStarted","Data":"2d17e0ae41ca7951c2eb574a47b9654698b76be542dec3bb1a773b0528940b56"} Oct 14 13:28:37 crc kubenswrapper[4725]: I1014 13:28:37.268540 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-68d546b9d8-cs2cd" Oct 14 13:28:37 crc kubenswrapper[4725]: I1014 13:28:37.268556 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-cs2cd" event={"ID":"bdee3f99-134e-4020-b9a6-fdc4c66081eb","Type":"ContainerStarted","Data":"f2fb180c2366baa44ddffc8160e312bccb54d1be015da2486cc502765457dcda"} Oct 14 13:28:37 crc kubenswrapper[4725]: I1014 13:28:37.268568 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-cs2cd" event={"ID":"bdee3f99-134e-4020-b9a6-fdc4c66081eb","Type":"ContainerStarted","Data":"ea609c80449d1de53da2e332652f8ba342a746d4d09366e16dafef94238b40dc"} Oct 14 13:28:37 crc kubenswrapper[4725]: I1014 13:28:37.269974 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-7xnfb" 
event={"ID":"9c551c0f-3df3-4ba6-8bbb-4d996cad9d45","Type":"ContainerStarted","Data":"dffa5f6e09dda6039715cb990df4a7260988edc0854b71ec626eb982caedf2e6"} Oct 14 13:28:37 crc kubenswrapper[4725]: I1014 13:28:37.286558 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-68d546b9d8-cs2cd" podStartSLOduration=2.2865396430000002 podStartE2EDuration="2.286539643s" podCreationTimestamp="2025-10-14 13:28:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:28:37.283577082 +0000 UTC m=+834.132011891" watchObservedRunningTime="2025-10-14 13:28:37.286539643 +0000 UTC m=+834.134974452" Oct 14 13:28:37 crc kubenswrapper[4725]: I1014 13:28:37.313625 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cjj4t" Oct 14 13:28:37 crc kubenswrapper[4725]: I1014 13:28:37.363333 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cjj4t"] Oct 14 13:28:37 crc kubenswrapper[4725]: I1014 13:28:37.441595 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8f68d749-82ff-45ee-b658-2324015012f7-memberlist\") pod \"speaker-2dxqc\" (UID: \"8f68d749-82ff-45ee-b658-2324015012f7\") " pod="metallb-system/speaker-2dxqc" Oct 14 13:28:37 crc kubenswrapper[4725]: I1014 13:28:37.449152 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8f68d749-82ff-45ee-b658-2324015012f7-memberlist\") pod \"speaker-2dxqc\" (UID: \"8f68d749-82ff-45ee-b658-2324015012f7\") " pod="metallb-system/speaker-2dxqc" Oct 14 13:28:37 crc kubenswrapper[4725]: I1014 13:28:37.524662 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-2dxqc" Oct 14 13:28:37 crc kubenswrapper[4725]: W1014 13:28:37.549405 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f68d749_82ff_45ee_b658_2324015012f7.slice/crio-82b0d77a7533c24d316d2f588fa6b23b16e62cf243db8b70d10a56bfc512593c WatchSource:0}: Error finding container 82b0d77a7533c24d316d2f588fa6b23b16e62cf243db8b70d10a56bfc512593c: Status 404 returned error can't find the container with id 82b0d77a7533c24d316d2f588fa6b23b16e62cf243db8b70d10a56bfc512593c Oct 14 13:28:38 crc kubenswrapper[4725]: I1014 13:28:38.276715 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2dxqc" event={"ID":"8f68d749-82ff-45ee-b658-2324015012f7","Type":"ContainerStarted","Data":"a94f4ec01562a03a03b93987fc246665f63807ce0741431a14acf67a024054b3"} Oct 14 13:28:38 crc kubenswrapper[4725]: I1014 13:28:38.277059 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2dxqc" event={"ID":"8f68d749-82ff-45ee-b658-2324015012f7","Type":"ContainerStarted","Data":"05ef2d554a1041cb59d74341f45e6dc8b6ccd38561ddc6b04287ccd2a85868f4"} Oct 14 13:28:38 crc kubenswrapper[4725]: I1014 13:28:38.277083 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2dxqc" event={"ID":"8f68d749-82ff-45ee-b658-2324015012f7","Type":"ContainerStarted","Data":"82b0d77a7533c24d316d2f588fa6b23b16e62cf243db8b70d10a56bfc512593c"} Oct 14 13:28:38 crc kubenswrapper[4725]: I1014 13:28:38.277226 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-2dxqc" Oct 14 13:28:38 crc kubenswrapper[4725]: I1014 13:28:38.299986 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-2dxqc" podStartSLOduration=3.299965154 podStartE2EDuration="3.299965154s" podCreationTimestamp="2025-10-14 13:28:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:28:38.297558048 +0000 UTC m=+835.145992867" watchObservedRunningTime="2025-10-14 13:28:38.299965154 +0000 UTC m=+835.148399963" Oct 14 13:28:39 crc kubenswrapper[4725]: I1014 13:28:39.284881 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cjj4t" podUID="b6b76e82-b82a-4e08-9d33-efb2d203fe56" containerName="registry-server" containerID="cri-o://786c3e51479c2fdfe964a5b04a691db0ad7d9b0a3b84610969f53d1ee51003ce" gracePeriod=2 Oct 14 13:28:39 crc kubenswrapper[4725]: I1014 13:28:39.695725 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cjj4t" Oct 14 13:28:39 crc kubenswrapper[4725]: I1014 13:28:39.775209 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6b76e82-b82a-4e08-9d33-efb2d203fe56-catalog-content\") pod \"b6b76e82-b82a-4e08-9d33-efb2d203fe56\" (UID: \"b6b76e82-b82a-4e08-9d33-efb2d203fe56\") " Oct 14 13:28:39 crc kubenswrapper[4725]: I1014 13:28:39.775296 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6b76e82-b82a-4e08-9d33-efb2d203fe56-utilities\") pod \"b6b76e82-b82a-4e08-9d33-efb2d203fe56\" (UID: \"b6b76e82-b82a-4e08-9d33-efb2d203fe56\") " Oct 14 13:28:39 crc kubenswrapper[4725]: I1014 13:28:39.775330 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fv5k\" (UniqueName: \"kubernetes.io/projected/b6b76e82-b82a-4e08-9d33-efb2d203fe56-kube-api-access-5fv5k\") pod \"b6b76e82-b82a-4e08-9d33-efb2d203fe56\" (UID: \"b6b76e82-b82a-4e08-9d33-efb2d203fe56\") " Oct 14 13:28:39 crc kubenswrapper[4725]: I1014 13:28:39.776956 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6b76e82-b82a-4e08-9d33-efb2d203fe56-utilities" (OuterVolumeSpecName: "utilities") pod "b6b76e82-b82a-4e08-9d33-efb2d203fe56" (UID: "b6b76e82-b82a-4e08-9d33-efb2d203fe56"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:28:39 crc kubenswrapper[4725]: I1014 13:28:39.782324 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6b76e82-b82a-4e08-9d33-efb2d203fe56-kube-api-access-5fv5k" (OuterVolumeSpecName: "kube-api-access-5fv5k") pod "b6b76e82-b82a-4e08-9d33-efb2d203fe56" (UID: "b6b76e82-b82a-4e08-9d33-efb2d203fe56"). InnerVolumeSpecName "kube-api-access-5fv5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:28:39 crc kubenswrapper[4725]: I1014 13:28:39.841203 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6b76e82-b82a-4e08-9d33-efb2d203fe56-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b6b76e82-b82a-4e08-9d33-efb2d203fe56" (UID: "b6b76e82-b82a-4e08-9d33-efb2d203fe56"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:28:39 crc kubenswrapper[4725]: I1014 13:28:39.876610 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fv5k\" (UniqueName: \"kubernetes.io/projected/b6b76e82-b82a-4e08-9d33-efb2d203fe56-kube-api-access-5fv5k\") on node \"crc\" DevicePath \"\"" Oct 14 13:28:39 crc kubenswrapper[4725]: I1014 13:28:39.876642 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6b76e82-b82a-4e08-9d33-efb2d203fe56-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 13:28:39 crc kubenswrapper[4725]: I1014 13:28:39.876652 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6b76e82-b82a-4e08-9d33-efb2d203fe56-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 13:28:40 crc kubenswrapper[4725]: I1014 13:28:40.292800 4725 generic.go:334] "Generic (PLEG): container finished" podID="b6b76e82-b82a-4e08-9d33-efb2d203fe56" containerID="786c3e51479c2fdfe964a5b04a691db0ad7d9b0a3b84610969f53d1ee51003ce" exitCode=0 Oct 14 13:28:40 crc kubenswrapper[4725]: I1014 13:28:40.292904 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cjj4t" event={"ID":"b6b76e82-b82a-4e08-9d33-efb2d203fe56","Type":"ContainerDied","Data":"786c3e51479c2fdfe964a5b04a691db0ad7d9b0a3b84610969f53d1ee51003ce"} Oct 14 13:28:40 crc kubenswrapper[4725]: I1014 13:28:40.293066 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cjj4t" event={"ID":"b6b76e82-b82a-4e08-9d33-efb2d203fe56","Type":"ContainerDied","Data":"c37bd952314c2748920c80822e869e51ae36bf94dcf9df0b6d6e32a21967651b"} Oct 14 13:28:40 crc kubenswrapper[4725]: I1014 13:28:40.293083 4725 scope.go:117] "RemoveContainer" containerID="786c3e51479c2fdfe964a5b04a691db0ad7d9b0a3b84610969f53d1ee51003ce" Oct 14 13:28:40 crc kubenswrapper[4725]: I1014 13:28:40.292928 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cjj4t" Oct 14 13:28:40 crc kubenswrapper[4725]: I1014 13:28:40.311315 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cjj4t"] Oct 14 13:28:40 crc kubenswrapper[4725]: I1014 13:28:40.311693 4725 scope.go:117] "RemoveContainer" containerID="4c51ef184cf43ba1ebb3a421d74c6a4d6f55df26d263d489e0f978e3fa4d6f81" Oct 14 13:28:40 crc kubenswrapper[4725]: I1014 13:28:40.316567 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cjj4t"] Oct 14 13:28:40 crc kubenswrapper[4725]: I1014 13:28:40.335149 4725 scope.go:117] "RemoveContainer" containerID="9695a46280f096ad78bc76987bd103313714b5b7fb242667eba51b3f3829416a" Oct 14 13:28:40 crc kubenswrapper[4725]: I1014 13:28:40.362937 4725 scope.go:117] "RemoveContainer" containerID="786c3e51479c2fdfe964a5b04a691db0ad7d9b0a3b84610969f53d1ee51003ce" Oct 14 13:28:40 crc kubenswrapper[4725]: E1014 13:28:40.363437 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"786c3e51479c2fdfe964a5b04a691db0ad7d9b0a3b84610969f53d1ee51003ce\": container with ID starting with 786c3e51479c2fdfe964a5b04a691db0ad7d9b0a3b84610969f53d1ee51003ce not found: ID does not exist" containerID="786c3e51479c2fdfe964a5b04a691db0ad7d9b0a3b84610969f53d1ee51003ce" Oct 14 13:28:40 crc kubenswrapper[4725]: I1014 13:28:40.363508 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"786c3e51479c2fdfe964a5b04a691db0ad7d9b0a3b84610969f53d1ee51003ce"} err="failed to get container status \"786c3e51479c2fdfe964a5b04a691db0ad7d9b0a3b84610969f53d1ee51003ce\": rpc error: code = NotFound desc = could not find container \"786c3e51479c2fdfe964a5b04a691db0ad7d9b0a3b84610969f53d1ee51003ce\": container with ID starting with 786c3e51479c2fdfe964a5b04a691db0ad7d9b0a3b84610969f53d1ee51003ce not found: ID does not exist" Oct 14 13:28:40 crc kubenswrapper[4725]: I1014 13:28:40.363543 4725 scope.go:117] "RemoveContainer" containerID="4c51ef184cf43ba1ebb3a421d74c6a4d6f55df26d263d489e0f978e3fa4d6f81" Oct 14 13:28:40 crc kubenswrapper[4725]: E1014 13:28:40.363905 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c51ef184cf43ba1ebb3a421d74c6a4d6f55df26d263d489e0f978e3fa4d6f81\": container with ID starting with 4c51ef184cf43ba1ebb3a421d74c6a4d6f55df26d263d489e0f978e3fa4d6f81 not found: ID does not exist" containerID="4c51ef184cf43ba1ebb3a421d74c6a4d6f55df26d263d489e0f978e3fa4d6f81" Oct 14 13:28:40 crc kubenswrapper[4725]: I1014 13:28:40.363939 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c51ef184cf43ba1ebb3a421d74c6a4d6f55df26d263d489e0f978e3fa4d6f81"} err="failed to get container status \"4c51ef184cf43ba1ebb3a421d74c6a4d6f55df26d263d489e0f978e3fa4d6f81\": rpc error: code = NotFound desc = could not find container \"4c51ef184cf43ba1ebb3a421d74c6a4d6f55df26d263d489e0f978e3fa4d6f81\": container with ID starting with 4c51ef184cf43ba1ebb3a421d74c6a4d6f55df26d263d489e0f978e3fa4d6f81 not found: ID does not exist" Oct 14 13:28:40 crc kubenswrapper[4725]: I1014 13:28:40.363957 4725 scope.go:117] "RemoveContainer" containerID="9695a46280f096ad78bc76987bd103313714b5b7fb242667eba51b3f3829416a" Oct 14 13:28:40 crc kubenswrapper[4725]: E1014 13:28:40.364264 4725 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9695a46280f096ad78bc76987bd103313714b5b7fb242667eba51b3f3829416a\": container with ID starting with 9695a46280f096ad78bc76987bd103313714b5b7fb242667eba51b3f3829416a not found: ID does not exist" containerID="9695a46280f096ad78bc76987bd103313714b5b7fb242667eba51b3f3829416a" Oct 14 13:28:40 crc kubenswrapper[4725]: I1014 13:28:40.364288 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9695a46280f096ad78bc76987bd103313714b5b7fb242667eba51b3f3829416a"} err="failed to get container status \"9695a46280f096ad78bc76987bd103313714b5b7fb242667eba51b3f3829416a\": rpc error: code = NotFound desc = could not find container \"9695a46280f096ad78bc76987bd103313714b5b7fb242667eba51b3f3829416a\": container with ID starting with 9695a46280f096ad78bc76987bd103313714b5b7fb242667eba51b3f3829416a not found: ID does not exist" Oct 14 13:28:41 crc kubenswrapper[4725]: I1014 13:28:41.929656 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6b76e82-b82a-4e08-9d33-efb2d203fe56" path="/var/lib/kubelet/pods/b6b76e82-b82a-4e08-9d33-efb2d203fe56/volumes" Oct 14 13:28:45 crc kubenswrapper[4725]: I1014 13:28:45.326779 4725 generic.go:334] "Generic (PLEG): container finished" podID="0aaf5c0e-3673-4bfa-a046-feed6a0121d7" containerID="81fd817e5ed5f87d55c872d208c3b5270f1ec9bc4210065ff635ca40f0a83bb9" exitCode=0 Oct 14 13:28:45 crc kubenswrapper[4725]: I1014 13:28:45.327366 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pmr8r" event={"ID":"0aaf5c0e-3673-4bfa-a046-feed6a0121d7","Type":"ContainerDied","Data":"81fd817e5ed5f87d55c872d208c3b5270f1ec9bc4210065ff635ca40f0a83bb9"} Oct 14 13:28:45 crc kubenswrapper[4725]: I1014 13:28:45.328878 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-7xnfb" event={"ID":"9c551c0f-3df3-4ba6-8bbb-4d996cad9d45","Type":"ContainerStarted","Data":"8b7c9966e85753197d7cb5081ca4c8b3d08e05835e57b52d3fea42213fcb7b8c"} Oct 14 13:28:45 crc kubenswrapper[4725]: I1014 13:28:45.329015 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-7xnfb" Oct 14 13:28:45 crc kubenswrapper[4725]: I1014 13:28:45.384704 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-7xnfb" podStartSLOduration=2.625496642 podStartE2EDuration="10.384676869s" podCreationTimestamp="2025-10-14 13:28:35 +0000 UTC" firstStartedPulling="2025-10-14 13:28:36.432526926 +0000 UTC m=+833.280961755" lastFinishedPulling="2025-10-14 13:28:44.191707183 +0000 UTC m=+841.040141982" observedRunningTime="2025-10-14 13:28:45.380556546 +0000 UTC m=+842.228991375" watchObservedRunningTime="2025-10-14 13:28:45.384676869 +0000 UTC m=+842.233111688" Oct 14 13:28:46 crc kubenswrapper[4725]: I1014 13:28:46.336214 4725 generic.go:334] "Generic (PLEG): container finished" podID="0aaf5c0e-3673-4bfa-a046-feed6a0121d7" containerID="49cdc176fd10404fc522486371d4ff22cf204ed17f7712bded2046e5e0e0fa57" exitCode=0 Oct 14 13:28:46 crc kubenswrapper[4725]: I1014 13:28:46.336319 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pmr8r" event={"ID":"0aaf5c0e-3673-4bfa-a046-feed6a0121d7","Type":"ContainerDied","Data":"49cdc176fd10404fc522486371d4ff22cf204ed17f7712bded2046e5e0e0fa57"} Oct 14 13:28:47 crc kubenswrapper[4725]: I1014 13:28:47.344170 4725 generic.go:334] 
"Generic (PLEG): container finished" podID="0aaf5c0e-3673-4bfa-a046-feed6a0121d7" containerID="be415318c8dedf3cf65f4051837329a0ac1567bbfab18affd27cd6fa7c000f0d" exitCode=0 Oct 14 13:28:47 crc kubenswrapper[4725]: I1014 13:28:47.344265 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pmr8r" event={"ID":"0aaf5c0e-3673-4bfa-a046-feed6a0121d7","Type":"ContainerDied","Data":"be415318c8dedf3cf65f4051837329a0ac1567bbfab18affd27cd6fa7c000f0d"} Oct 14 13:28:47 crc kubenswrapper[4725]: I1014 13:28:47.527904 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-2dxqc" Oct 14 13:28:48 crc kubenswrapper[4725]: I1014 13:28:48.356371 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pmr8r" event={"ID":"0aaf5c0e-3673-4bfa-a046-feed6a0121d7","Type":"ContainerStarted","Data":"e78997140282522dd097839d405b45f4c459f9e0b602754d901ee7c7d51d3e86"} Oct 14 13:28:48 crc kubenswrapper[4725]: I1014 13:28:48.356765 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pmr8r" event={"ID":"0aaf5c0e-3673-4bfa-a046-feed6a0121d7","Type":"ContainerStarted","Data":"fb58de32fcaa8456f82d784b9a0bdef760c0fa924b0662ef1e6970678ce7efc7"} Oct 14 13:28:48 crc kubenswrapper[4725]: I1014 13:28:48.356780 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pmr8r" event={"ID":"0aaf5c0e-3673-4bfa-a046-feed6a0121d7","Type":"ContainerStarted","Data":"6800d0bed2b4fcec24caa1897a072328f47dba7f8b8086dde2fa0e9f700fc6c2"} Oct 14 13:28:48 crc kubenswrapper[4725]: I1014 13:28:48.356793 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pmr8r" event={"ID":"0aaf5c0e-3673-4bfa-a046-feed6a0121d7","Type":"ContainerStarted","Data":"1dae183ec5b98982e573e43744af9be5a79fa1ca0b583bc10950226f59bfa4c1"} Oct 14 13:28:48 crc kubenswrapper[4725]: I1014 13:28:48.356805 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pmr8r" event={"ID":"0aaf5c0e-3673-4bfa-a046-feed6a0121d7","Type":"ContainerStarted","Data":"6a7fed53c5745965ba6eabe0ca31b47d77714e07bac07ad0b21b82e10ed8c206"} Oct 14 13:28:49 crc kubenswrapper[4725]: I1014 13:28:49.368031 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pmr8r" event={"ID":"0aaf5c0e-3673-4bfa-a046-feed6a0121d7","Type":"ContainerStarted","Data":"342e3ab09f98bf49d15a3d71784b2b915c603b9a89d3b3c385917b643caf3352"} Oct 14 13:28:49 crc kubenswrapper[4725]: I1014 13:28:49.368213 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-pmr8r" Oct 14 13:28:50 crc kubenswrapper[4725]: I1014 13:28:50.287540 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-pmr8r" podStartSLOduration=7.229865942 podStartE2EDuration="15.287503482s" podCreationTimestamp="2025-10-14 13:28:35 +0000 UTC" firstStartedPulling="2025-10-14 13:28:36.12844133 +0000 UTC m=+832.976876139" lastFinishedPulling="2025-10-14 13:28:44.18607887 +0000 UTC m=+841.034513679" observedRunningTime="2025-10-14 13:28:49.417255995 +0000 UTC m=+846.265690814" watchObservedRunningTime="2025-10-14 13:28:50.287503482 +0000 UTC m=+847.135938351" Oct 14 13:28:50 crc kubenswrapper[4725]: I1014 13:28:50.291618 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-f656c"] Oct 14 13:28:50 crc kubenswrapper[4725]: E1014 13:28:50.292287 4725 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b6b76e82-b82a-4e08-9d33-efb2d203fe56" containerName="extract-content" Oct 14 13:28:50 crc kubenswrapper[4725]: I1014 13:28:50.292316 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6b76e82-b82a-4e08-9d33-efb2d203fe56" containerName="extract-content" Oct 14 13:28:50 crc kubenswrapper[4725]: E1014 13:28:50.292337 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6b76e82-b82a-4e08-9d33-efb2d203fe56" containerName="extract-utilities" Oct 14 13:28:50 crc kubenswrapper[4725]: I1014 13:28:50.292350 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6b76e82-b82a-4e08-9d33-efb2d203fe56" containerName="extract-utilities" Oct 14 13:28:50 crc kubenswrapper[4725]: E1014 13:28:50.292372 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6b76e82-b82a-4e08-9d33-efb2d203fe56" containerName="registry-server" Oct 14 13:28:50 crc kubenswrapper[4725]: I1014 13:28:50.292385 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6b76e82-b82a-4e08-9d33-efb2d203fe56" containerName="registry-server" Oct 14 13:28:50 crc kubenswrapper[4725]: I1014 13:28:50.292625 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6b76e82-b82a-4e08-9d33-efb2d203fe56" containerName="registry-server" Oct 14 13:28:50 crc kubenswrapper[4725]: I1014 13:28:50.293976 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f656c" Oct 14 13:28:50 crc kubenswrapper[4725]: I1014 13:28:50.310046 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f656c"] Oct 14 13:28:50 crc kubenswrapper[4725]: I1014 13:28:50.440345 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tg86\" (UniqueName: \"kubernetes.io/projected/02496c8e-59e0-4efb-82d8-51a4de36d04f-kube-api-access-9tg86\") pod \"certified-operators-f656c\" (UID: \"02496c8e-59e0-4efb-82d8-51a4de36d04f\") " pod="openshift-marketplace/certified-operators-f656c" Oct 14 13:28:50 crc kubenswrapper[4725]: I1014 13:28:50.440430 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02496c8e-59e0-4efb-82d8-51a4de36d04f-utilities\") pod \"certified-operators-f656c\" (UID: \"02496c8e-59e0-4efb-82d8-51a4de36d04f\") " pod="openshift-marketplace/certified-operators-f656c" Oct 14 13:28:50 crc kubenswrapper[4725]: I1014 13:28:50.440598 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02496c8e-59e0-4efb-82d8-51a4de36d04f-catalog-content\") pod \"certified-operators-f656c\" (UID: \"02496c8e-59e0-4efb-82d8-51a4de36d04f\") " pod="openshift-marketplace/certified-operators-f656c" Oct 14 13:28:50 crc kubenswrapper[4725]: I1014 13:28:50.541878 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02496c8e-59e0-4efb-82d8-51a4de36d04f-utilities\") pod \"certified-operators-f656c\" (UID: \"02496c8e-59e0-4efb-82d8-51a4de36d04f\") " pod="openshift-marketplace/certified-operators-f656c" Oct 14 13:28:50 crc kubenswrapper[4725]: I1014 13:28:50.541934 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02496c8e-59e0-4efb-82d8-51a4de36d04f-catalog-content\") pod 
\"certified-operators-f656c\" (UID: \"02496c8e-59e0-4efb-82d8-51a4de36d04f\") " pod="openshift-marketplace/certified-operators-f656c" Oct 14 13:28:50 crc kubenswrapper[4725]: I1014 13:28:50.542019 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tg86\" (UniqueName: \"kubernetes.io/projected/02496c8e-59e0-4efb-82d8-51a4de36d04f-kube-api-access-9tg86\") pod \"certified-operators-f656c\" (UID: \"02496c8e-59e0-4efb-82d8-51a4de36d04f\") " pod="openshift-marketplace/certified-operators-f656c" Oct 14 13:28:50 crc kubenswrapper[4725]: I1014 13:28:50.542429 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02496c8e-59e0-4efb-82d8-51a4de36d04f-utilities\") pod \"certified-operators-f656c\" (UID: \"02496c8e-59e0-4efb-82d8-51a4de36d04f\") " pod="openshift-marketplace/certified-operators-f656c" Oct 14 13:28:50 crc kubenswrapper[4725]: I1014 13:28:50.542574 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02496c8e-59e0-4efb-82d8-51a4de36d04f-catalog-content\") pod \"certified-operators-f656c\" (UID: \"02496c8e-59e0-4efb-82d8-51a4de36d04f\") " pod="openshift-marketplace/certified-operators-f656c" Oct 14 13:28:50 crc kubenswrapper[4725]: I1014 13:28:50.562274 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tg86\" (UniqueName: \"kubernetes.io/projected/02496c8e-59e0-4efb-82d8-51a4de36d04f-kube-api-access-9tg86\") pod \"certified-operators-f656c\" (UID: \"02496c8e-59e0-4efb-82d8-51a4de36d04f\") " pod="openshift-marketplace/certified-operators-f656c" Oct 14 13:28:50 crc kubenswrapper[4725]: I1014 13:28:50.627139 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f656c" Oct 14 13:28:50 crc kubenswrapper[4725]: I1014 13:28:50.971464 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-pmr8r" Oct 14 13:28:51 crc kubenswrapper[4725]: I1014 13:28:51.015647 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-pmr8r" Oct 14 13:28:51 crc kubenswrapper[4725]: I1014 13:28:51.123776 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f656c"] Oct 14 13:28:51 crc kubenswrapper[4725]: I1014 13:28:51.380633 4725 generic.go:334] "Generic (PLEG): container finished" podID="02496c8e-59e0-4efb-82d8-51a4de36d04f" containerID="54d4444f072de249679fbfde3cd6b8e5c00a228184e260183c9f79301d1470aa" exitCode=0 Oct 14 13:28:51 crc kubenswrapper[4725]: I1014 13:28:51.380685 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f656c" event={"ID":"02496c8e-59e0-4efb-82d8-51a4de36d04f","Type":"ContainerDied","Data":"54d4444f072de249679fbfde3cd6b8e5c00a228184e260183c9f79301d1470aa"} Oct 14 13:28:51 crc kubenswrapper[4725]: I1014 13:28:51.380732 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f656c" event={"ID":"02496c8e-59e0-4efb-82d8-51a4de36d04f","Type":"ContainerStarted","Data":"b9c49d27586ab330f930e1aabe58a6190c03292e81bf600373f7d5cbd442398a"} Oct 14 13:28:53 crc kubenswrapper[4725]: I1014 13:28:53.401390 4725 generic.go:334] "Generic (PLEG): container finished" podID="02496c8e-59e0-4efb-82d8-51a4de36d04f" containerID="15742252cf45237821fbbbf1b74ffcb0433fd2a9bb4fba048f5dd029caf6e826" exitCode=0 Oct 14 13:28:53 crc kubenswrapper[4725]: I1014 13:28:53.401503 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f656c" event={"ID":"02496c8e-59e0-4efb-82d8-51a4de36d04f","Type":"ContainerDied","Data":"15742252cf45237821fbbbf1b74ffcb0433fd2a9bb4fba048f5dd029caf6e826"} Oct 14 13:28:53 crc kubenswrapper[4725]: I1014 13:28:53.876734 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-v8nhf"] Oct 14 13:28:53 crc kubenswrapper[4725]: I1014 13:28:53.877630 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-v8nhf" Oct 14 13:28:53 crc kubenswrapper[4725]: I1014 13:28:53.887248 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-8v6qx" Oct 14 13:28:53 crc kubenswrapper[4725]: I1014 13:28:53.887546 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Oct 14 13:28:53 crc kubenswrapper[4725]: I1014 13:28:53.890489 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Oct 14 13:28:53 crc kubenswrapper[4725]: I1014 13:28:53.897719 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-v8nhf"] Oct 14 13:28:53 crc kubenswrapper[4725]: I1014 13:28:53.991615 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kktrx\" (UniqueName: \"kubernetes.io/projected/4473a60c-84a6-440a-ad02-ed3331345270-kube-api-access-kktrx\") pod \"openstack-operator-index-v8nhf\" (UID: \"4473a60c-84a6-440a-ad02-ed3331345270\") " pod="openstack-operators/openstack-operator-index-v8nhf" Oct 14 13:28:54 crc kubenswrapper[4725]: I1014 13:28:54.093460 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kktrx\" (UniqueName: \"kubernetes.io/projected/4473a60c-84a6-440a-ad02-ed3331345270-kube-api-access-kktrx\") pod \"openstack-operator-index-v8nhf\" (UID: \"4473a60c-84a6-440a-ad02-ed3331345270\") " pod="openstack-operators/openstack-operator-index-v8nhf" Oct 14 13:28:54 crc kubenswrapper[4725]: I1014 13:28:54.118616 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kktrx\" (UniqueName: \"kubernetes.io/projected/4473a60c-84a6-440a-ad02-ed3331345270-kube-api-access-kktrx\") pod \"openstack-operator-index-v8nhf\" (UID: \"4473a60c-84a6-440a-ad02-ed3331345270\") " pod="openstack-operators/openstack-operator-index-v8nhf" Oct 14 13:28:54 crc kubenswrapper[4725]: I1014 13:28:54.195108 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-v8nhf" Oct 14 13:28:54 crc kubenswrapper[4725]: I1014 13:28:54.413659 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f656c" event={"ID":"02496c8e-59e0-4efb-82d8-51a4de36d04f","Type":"ContainerStarted","Data":"7c61d7e9a1736a10071fce1341663e87013341d6df7e632d0040f23448d7a023"} Oct 14 13:28:54 crc kubenswrapper[4725]: I1014 13:28:54.621358 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-f656c" podStartSLOduration=2.095856333 podStartE2EDuration="4.621336955s" podCreationTimestamp="2025-10-14 13:28:50 +0000 UTC" firstStartedPulling="2025-10-14 13:28:51.38239276 +0000 UTC m=+848.230827569" lastFinishedPulling="2025-10-14 13:28:53.907873342 +0000 UTC m=+850.756308191" observedRunningTime="2025-10-14 13:28:54.433443985 +0000 UTC m=+851.281878804" watchObservedRunningTime="2025-10-14 13:28:54.621336955 +0000 UTC m=+851.469771764" Oct 14 13:28:54 crc kubenswrapper[4725]: I1014 13:28:54.622990 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-v8nhf"] Oct 14 13:28:54 crc kubenswrapper[4725]: W1014 13:28:54.632101 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4473a60c_84a6_440a_ad02_ed3331345270.slice/crio-3d0bbcd585a48e64742a6270f7242d853a0a5f71a36cfce4fbe9e2ffdd04aa31 WatchSource:0}: Error finding container 3d0bbcd585a48e64742a6270f7242d853a0a5f71a36cfce4fbe9e2ffdd04aa31: Status 404 returned error can't find the container with id 3d0bbcd585a48e64742a6270f7242d853a0a5f71a36cfce4fbe9e2ffdd04aa31 Oct 14 13:28:55 crc kubenswrapper[4725]: I1014 13:28:55.422100 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-v8nhf" event={"ID":"4473a60c-84a6-440a-ad02-ed3331345270","Type":"ContainerStarted","Data":"3d0bbcd585a48e64742a6270f7242d853a0a5f71a36cfce4fbe9e2ffdd04aa31"} Oct 14 13:28:55 crc kubenswrapper[4725]: I1014 13:28:55.987970 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-7xnfb" Oct 14 13:28:56 crc kubenswrapper[4725]: I1014 13:28:56.665519 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-68d546b9d8-cs2cd" Oct 14 13:28:57 crc kubenswrapper[4725]: I1014 13:28:57.437464 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-v8nhf" event={"ID":"4473a60c-84a6-440a-ad02-ed3331345270","Type":"ContainerStarted","Data":"57641dfedded9c24647ee3b3791bcfb0653ac98c792e18c1271322ec9e5d609a"} Oct 14 13:28:57 crc kubenswrapper[4725]: I1014 13:28:57.466844 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-v8nhf" podStartSLOduration=2.101369703 podStartE2EDuration="4.466820824s" podCreationTimestamp="2025-10-14 13:28:53 +0000 UTC" firstStartedPulling="2025-10-14 13:28:54.636463691 +0000 UTC m=+851.484898500" lastFinishedPulling="2025-10-14 13:28:57.001914792 +0000 UTC m=+853.850349621" observedRunningTime="2025-10-14 13:28:57.464679745 +0000 UTC m=+854.313114604" watchObservedRunningTime="2025-10-14 13:28:57.466820824 +0000 UTC m=+854.315255633" Oct 14 13:29:00 crc kubenswrapper[4725]: I1014 13:29:00.627339 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-f656c" Oct 14 13:29:00 crc kubenswrapper[4725]: I1014 13:29:00.627387 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-f656c" Oct 14 13:29:00 crc kubenswrapper[4725]: I1014 13:29:00.700193 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-f656c" Oct 14 13:29:01 crc kubenswrapper[4725]: I1014 13:29:01.526554 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-f656c" Oct 14 13:29:04 crc kubenswrapper[4725]: I1014 13:29:04.195578 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-v8nhf" Oct 14 13:29:04 crc kubenswrapper[4725]: I1014 13:29:04.197183 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-v8nhf" Oct 14 13:29:04 crc kubenswrapper[4725]: I1014 13:29:04.225726 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-v8nhf" Oct 14 13:29:04 crc kubenswrapper[4725]: I1014 13:29:04.536136 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-v8nhf" Oct 14 13:29:05 crc kubenswrapper[4725]: I1014 13:29:05.076944 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f656c"] Oct 14 13:29:05 crc kubenswrapper[4725]: I1014 13:29:05.077325 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-f656c" podUID="02496c8e-59e0-4efb-82d8-51a4de36d04f" containerName="registry-server" containerID="cri-o://7c61d7e9a1736a10071fce1341663e87013341d6df7e632d0040f23448d7a023" gracePeriod=2 Oct 14 13:29:05 crc kubenswrapper[4725]: I1014 13:29:05.506807 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f656c" Oct 14 13:29:05 crc kubenswrapper[4725]: I1014 13:29:05.515681 4725 generic.go:334] "Generic (PLEG): container finished" podID="02496c8e-59e0-4efb-82d8-51a4de36d04f" containerID="7c61d7e9a1736a10071fce1341663e87013341d6df7e632d0040f23448d7a023" exitCode=0 Oct 14 13:29:05 crc kubenswrapper[4725]: I1014 13:29:05.516317 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f656c" Oct 14 13:29:05 crc kubenswrapper[4725]: I1014 13:29:05.516444 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f656c" event={"ID":"02496c8e-59e0-4efb-82d8-51a4de36d04f","Type":"ContainerDied","Data":"7c61d7e9a1736a10071fce1341663e87013341d6df7e632d0040f23448d7a023"} Oct 14 13:29:05 crc kubenswrapper[4725]: I1014 13:29:05.516492 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f656c" event={"ID":"02496c8e-59e0-4efb-82d8-51a4de36d04f","Type":"ContainerDied","Data":"b9c49d27586ab330f930e1aabe58a6190c03292e81bf600373f7d5cbd442398a"} Oct 14 13:29:05 crc kubenswrapper[4725]: I1014 13:29:05.516508 4725 scope.go:117] "RemoveContainer" containerID="7c61d7e9a1736a10071fce1341663e87013341d6df7e632d0040f23448d7a023" Oct 14 13:29:05 crc kubenswrapper[4725]: I1014 13:29:05.535625 4725 scope.go:117] "RemoveContainer" containerID="15742252cf45237821fbbbf1b74ffcb0433fd2a9bb4fba048f5dd029caf6e826" Oct 14 13:29:05 crc kubenswrapper[4725]: I1014 13:29:05.574159 4725 scope.go:117] "RemoveContainer" containerID="54d4444f072de249679fbfde3cd6b8e5c00a228184e260183c9f79301d1470aa" Oct 14 13:29:05 crc kubenswrapper[4725]: I1014 13:29:05.594270 4725 scope.go:117] "RemoveContainer" containerID="7c61d7e9a1736a10071fce1341663e87013341d6df7e632d0040f23448d7a023" Oct 14 13:29:05 crc kubenswrapper[4725]: E1014 13:29:05.594757 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c61d7e9a1736a10071fce1341663e87013341d6df7e632d0040f23448d7a023\": container with ID starting with 7c61d7e9a1736a10071fce1341663e87013341d6df7e632d0040f23448d7a023 not found: ID does not exist" containerID="7c61d7e9a1736a10071fce1341663e87013341d6df7e632d0040f23448d7a023" Oct 14 13:29:05 crc kubenswrapper[4725]: I1014 13:29:05.594846 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c61d7e9a1736a10071fce1341663e87013341d6df7e632d0040f23448d7a023"} err="failed to get container status \"7c61d7e9a1736a10071fce1341663e87013341d6df7e632d0040f23448d7a023\": rpc error: code = NotFound desc = could not find container \"7c61d7e9a1736a10071fce1341663e87013341d6df7e632d0040f23448d7a023\": container with ID starting with 7c61d7e9a1736a10071fce1341663e87013341d6df7e632d0040f23448d7a023 not found: ID does not exist" Oct 14 13:29:05 crc kubenswrapper[4725]: I1014 13:29:05.594921 4725 scope.go:117] "RemoveContainer" containerID="15742252cf45237821fbbbf1b74ffcb0433fd2a9bb4fba048f5dd029caf6e826" Oct 14 13:29:05 crc kubenswrapper[4725]: E1014 13:29:05.595251 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15742252cf45237821fbbbf1b74ffcb0433fd2a9bb4fba048f5dd029caf6e826\": container with ID starting with 15742252cf45237821fbbbf1b74ffcb0433fd2a9bb4fba048f5dd029caf6e826 not found: ID does not exist" containerID="15742252cf45237821fbbbf1b74ffcb0433fd2a9bb4fba048f5dd029caf6e826" Oct 14 13:29:05 crc kubenswrapper[4725]: I1014 13:29:05.595329 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15742252cf45237821fbbbf1b74ffcb0433fd2a9bb4fba048f5dd029caf6e826"} err="failed to get container status \"15742252cf45237821fbbbf1b74ffcb0433fd2a9bb4fba048f5dd029caf6e826\": rpc error: code = NotFound desc = could not find container 
\"15742252cf45237821fbbbf1b74ffcb0433fd2a9bb4fba048f5dd029caf6e826\": container with ID starting with 15742252cf45237821fbbbf1b74ffcb0433fd2a9bb4fba048f5dd029caf6e826 not found: ID does not exist" Oct 14 13:29:05 crc kubenswrapper[4725]: I1014 13:29:05.595392 4725 scope.go:117] "RemoveContainer" containerID="54d4444f072de249679fbfde3cd6b8e5c00a228184e260183c9f79301d1470aa" Oct 14 13:29:05 crc kubenswrapper[4725]: E1014 13:29:05.595859 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54d4444f072de249679fbfde3cd6b8e5c00a228184e260183c9f79301d1470aa\": container with ID starting with 54d4444f072de249679fbfde3cd6b8e5c00a228184e260183c9f79301d1470aa not found: ID does not exist" containerID="54d4444f072de249679fbfde3cd6b8e5c00a228184e260183c9f79301d1470aa" Oct 14 13:29:05 crc kubenswrapper[4725]: I1014 13:29:05.595935 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54d4444f072de249679fbfde3cd6b8e5c00a228184e260183c9f79301d1470aa"} err="failed to get container status \"54d4444f072de249679fbfde3cd6b8e5c00a228184e260183c9f79301d1470aa\": rpc error: code = NotFound desc = could not find container \"54d4444f072de249679fbfde3cd6b8e5c00a228184e260183c9f79301d1470aa\": container with ID starting with 54d4444f072de249679fbfde3cd6b8e5c00a228184e260183c9f79301d1470aa not found: ID does not exist" Oct 14 13:29:05 crc kubenswrapper[4725]: I1014 13:29:05.665588 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tg86\" (UniqueName: \"kubernetes.io/projected/02496c8e-59e0-4efb-82d8-51a4de36d04f-kube-api-access-9tg86\") pod \"02496c8e-59e0-4efb-82d8-51a4de36d04f\" (UID: \"02496c8e-59e0-4efb-82d8-51a4de36d04f\") " Oct 14 13:29:05 crc kubenswrapper[4725]: I1014 13:29:05.665729 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02496c8e-59e0-4efb-82d8-51a4de36d04f-catalog-content\") pod \"02496c8e-59e0-4efb-82d8-51a4de36d04f\" (UID: \"02496c8e-59e0-4efb-82d8-51a4de36d04f\") " Oct 14 13:29:05 crc kubenswrapper[4725]: I1014 13:29:05.665765 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02496c8e-59e0-4efb-82d8-51a4de36d04f-utilities\") pod \"02496c8e-59e0-4efb-82d8-51a4de36d04f\" (UID: \"02496c8e-59e0-4efb-82d8-51a4de36d04f\") " Oct 14 13:29:05 crc kubenswrapper[4725]: I1014 13:29:05.667257 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02496c8e-59e0-4efb-82d8-51a4de36d04f-utilities" (OuterVolumeSpecName: "utilities") pod "02496c8e-59e0-4efb-82d8-51a4de36d04f" (UID: "02496c8e-59e0-4efb-82d8-51a4de36d04f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:29:05 crc kubenswrapper[4725]: I1014 13:29:05.672158 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02496c8e-59e0-4efb-82d8-51a4de36d04f-kube-api-access-9tg86" (OuterVolumeSpecName: "kube-api-access-9tg86") pod "02496c8e-59e0-4efb-82d8-51a4de36d04f" (UID: "02496c8e-59e0-4efb-82d8-51a4de36d04f"). InnerVolumeSpecName "kube-api-access-9tg86". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:29:05 crc kubenswrapper[4725]: I1014 13:29:05.708548 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02496c8e-59e0-4efb-82d8-51a4de36d04f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02496c8e-59e0-4efb-82d8-51a4de36d04f" (UID: "02496c8e-59e0-4efb-82d8-51a4de36d04f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:29:05 crc kubenswrapper[4725]: I1014 13:29:05.767596 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tg86\" (UniqueName: \"kubernetes.io/projected/02496c8e-59e0-4efb-82d8-51a4de36d04f-kube-api-access-9tg86\") on node \"crc\" DevicePath \"\"" Oct 14 13:29:05 crc kubenswrapper[4725]: I1014 13:29:05.767644 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02496c8e-59e0-4efb-82d8-51a4de36d04f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 13:29:05 crc kubenswrapper[4725]: I1014 13:29:05.767657 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02496c8e-59e0-4efb-82d8-51a4de36d04f-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 13:29:05 crc kubenswrapper[4725]: I1014 13:29:05.844927 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f656c"] Oct 14 13:29:05 crc kubenswrapper[4725]: I1014 13:29:05.847977 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-f656c"] Oct 14 13:29:05 crc kubenswrapper[4725]: I1014 13:29:05.930907 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02496c8e-59e0-4efb-82d8-51a4de36d04f" path="/var/lib/kubelet/pods/02496c8e-59e0-4efb-82d8-51a4de36d04f/volumes" Oct 14 13:29:05 crc kubenswrapper[4725]: I1014 13:29:05.974002 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-pmr8r" Oct 14 13:29:06 crc kubenswrapper[4725]: I1014 13:29:06.912274 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/98d1e881be4ee8680d11b9a5631544c7dac60e779fa4b1480aa7108c3f4xjzs"] Oct 14 13:29:06 crc kubenswrapper[4725]: E1014 13:29:06.912547 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02496c8e-59e0-4efb-82d8-51a4de36d04f" containerName="registry-server" Oct 14 13:29:06 crc kubenswrapper[4725]: I1014 13:29:06.912560 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="02496c8e-59e0-4efb-82d8-51a4de36d04f" containerName="registry-server" Oct 14 13:29:06 crc kubenswrapper[4725]: E1014 13:29:06.912577 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02496c8e-59e0-4efb-82d8-51a4de36d04f" containerName="extract-content" Oct 14 13:29:06 crc kubenswrapper[4725]: I1014 13:29:06.912583 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="02496c8e-59e0-4efb-82d8-51a4de36d04f" containerName="extract-content" Oct 14 13:29:06 crc kubenswrapper[4725]: E1014 13:29:06.912595 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02496c8e-59e0-4efb-82d8-51a4de36d04f" containerName="extract-utilities" Oct 14 13:29:06 crc kubenswrapper[4725]: I1014 13:29:06.912601 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="02496c8e-59e0-4efb-82d8-51a4de36d04f" containerName="extract-utilities" Oct 14 13:29:06 crc kubenswrapper[4725]: I1014 13:29:06.912701 4725 
memory_manager.go:354] "RemoveStaleState removing state" podUID="02496c8e-59e0-4efb-82d8-51a4de36d04f" containerName="registry-server" Oct 14 13:29:06 crc kubenswrapper[4725]: I1014 13:29:06.913483 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/98d1e881be4ee8680d11b9a5631544c7dac60e779fa4b1480aa7108c3f4xjzs" Oct 14 13:29:06 crc kubenswrapper[4725]: I1014 13:29:06.917152 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-79z4m" Oct 14 13:29:06 crc kubenswrapper[4725]: I1014 13:29:06.924983 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/98d1e881be4ee8680d11b9a5631544c7dac60e779fa4b1480aa7108c3f4xjzs"] Oct 14 13:29:07 crc kubenswrapper[4725]: I1014 13:29:07.087387 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk8z4\" (UniqueName: \"kubernetes.io/projected/61036e72-6020-469f-9406-d49df28fbd36-kube-api-access-fk8z4\") pod \"98d1e881be4ee8680d11b9a5631544c7dac60e779fa4b1480aa7108c3f4xjzs\" (UID: \"61036e72-6020-469f-9406-d49df28fbd36\") " pod="openstack-operators/98d1e881be4ee8680d11b9a5631544c7dac60e779fa4b1480aa7108c3f4xjzs" Oct 14 13:29:07 crc kubenswrapper[4725]: I1014 13:29:07.087484 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/61036e72-6020-469f-9406-d49df28fbd36-bundle\") pod \"98d1e881be4ee8680d11b9a5631544c7dac60e779fa4b1480aa7108c3f4xjzs\" (UID: \"61036e72-6020-469f-9406-d49df28fbd36\") " pod="openstack-operators/98d1e881be4ee8680d11b9a5631544c7dac60e779fa4b1480aa7108c3f4xjzs" Oct 14 13:29:07 crc kubenswrapper[4725]: I1014 13:29:07.087590 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/61036e72-6020-469f-9406-d49df28fbd36-util\") pod \"98d1e881be4ee8680d11b9a5631544c7dac60e779fa4b1480aa7108c3f4xjzs\" (UID: \"61036e72-6020-469f-9406-d49df28fbd36\") " pod="openstack-operators/98d1e881be4ee8680d11b9a5631544c7dac60e779fa4b1480aa7108c3f4xjzs" Oct 14 13:29:07 crc kubenswrapper[4725]: I1014 13:29:07.188554 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/61036e72-6020-469f-9406-d49df28fbd36-util\") pod \"98d1e881be4ee8680d11b9a5631544c7dac60e779fa4b1480aa7108c3f4xjzs\" (UID: \"61036e72-6020-469f-9406-d49df28fbd36\") " pod="openstack-operators/98d1e881be4ee8680d11b9a5631544c7dac60e779fa4b1480aa7108c3f4xjzs" Oct 14 13:29:07 crc kubenswrapper[4725]: I1014 13:29:07.188670 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk8z4\" (UniqueName: \"kubernetes.io/projected/61036e72-6020-469f-9406-d49df28fbd36-kube-api-access-fk8z4\") pod \"98d1e881be4ee8680d11b9a5631544c7dac60e779fa4b1480aa7108c3f4xjzs\" (UID: \"61036e72-6020-469f-9406-d49df28fbd36\") " pod="openstack-operators/98d1e881be4ee8680d11b9a5631544c7dac60e779fa4b1480aa7108c3f4xjzs" Oct 14 13:29:07 crc kubenswrapper[4725]: I1014 13:29:07.188708 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/61036e72-6020-469f-9406-d49df28fbd36-bundle\") pod \"98d1e881be4ee8680d11b9a5631544c7dac60e779fa4b1480aa7108c3f4xjzs\" (UID: \"61036e72-6020-469f-9406-d49df28fbd36\") " 
pod="openstack-operators/98d1e881be4ee8680d11b9a5631544c7dac60e779fa4b1480aa7108c3f4xjzs" Oct 14 13:29:07 crc kubenswrapper[4725]: I1014 13:29:07.188994 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/61036e72-6020-469f-9406-d49df28fbd36-util\") pod \"98d1e881be4ee8680d11b9a5631544c7dac60e779fa4b1480aa7108c3f4xjzs\" (UID: \"61036e72-6020-469f-9406-d49df28fbd36\") " pod="openstack-operators/98d1e881be4ee8680d11b9a5631544c7dac60e779fa4b1480aa7108c3f4xjzs" Oct 14 13:29:07 crc kubenswrapper[4725]: I1014 13:29:07.189085 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/61036e72-6020-469f-9406-d49df28fbd36-bundle\") pod \"98d1e881be4ee8680d11b9a5631544c7dac60e779fa4b1480aa7108c3f4xjzs\" (UID: \"61036e72-6020-469f-9406-d49df28fbd36\") " pod="openstack-operators/98d1e881be4ee8680d11b9a5631544c7dac60e779fa4b1480aa7108c3f4xjzs" Oct 14 13:29:07 crc kubenswrapper[4725]: I1014 13:29:07.204085 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk8z4\" (UniqueName: \"kubernetes.io/projected/61036e72-6020-469f-9406-d49df28fbd36-kube-api-access-fk8z4\") pod \"98d1e881be4ee8680d11b9a5631544c7dac60e779fa4b1480aa7108c3f4xjzs\" (UID: \"61036e72-6020-469f-9406-d49df28fbd36\") " pod="openstack-operators/98d1e881be4ee8680d11b9a5631544c7dac60e779fa4b1480aa7108c3f4xjzs" Oct 14 13:29:07 crc kubenswrapper[4725]: I1014 13:29:07.266214 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/98d1e881be4ee8680d11b9a5631544c7dac60e779fa4b1480aa7108c3f4xjzs" Oct 14 13:29:07 crc kubenswrapper[4725]: I1014 13:29:07.644578 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/98d1e881be4ee8680d11b9a5631544c7dac60e779fa4b1480aa7108c3f4xjzs"] Oct 14 13:29:07 crc kubenswrapper[4725]: W1014 13:29:07.659856 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61036e72_6020_469f_9406_d49df28fbd36.slice/crio-d21580f32d37e1c5d595e3e475f92ffc7ee0f275ef13c772fcc15081325a738f WatchSource:0}: Error finding container d21580f32d37e1c5d595e3e475f92ffc7ee0f275ef13c772fcc15081325a738f: Status 404 returned error can't find the container with id d21580f32d37e1c5d595e3e475f92ffc7ee0f275ef13c772fcc15081325a738f Oct 14 13:29:08 crc kubenswrapper[4725]: I1014 13:29:08.542096 4725 generic.go:334] "Generic (PLEG): container finished" podID="61036e72-6020-469f-9406-d49df28fbd36" containerID="30eb83f941d50480ccfb2e0d0d62e130324bd639c9e3298c4c16ec8f6b7acb35" exitCode=0 Oct 14 13:29:08 crc kubenswrapper[4725]: I1014 13:29:08.542210 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/98d1e881be4ee8680d11b9a5631544c7dac60e779fa4b1480aa7108c3f4xjzs" event={"ID":"61036e72-6020-469f-9406-d49df28fbd36","Type":"ContainerDied","Data":"30eb83f941d50480ccfb2e0d0d62e130324bd639c9e3298c4c16ec8f6b7acb35"} Oct 14 13:29:08 crc kubenswrapper[4725]: I1014 13:29:08.542416 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/98d1e881be4ee8680d11b9a5631544c7dac60e779fa4b1480aa7108c3f4xjzs" event={"ID":"61036e72-6020-469f-9406-d49df28fbd36","Type":"ContainerStarted","Data":"d21580f32d37e1c5d595e3e475f92ffc7ee0f275ef13c772fcc15081325a738f"} Oct 14 13:29:09 crc kubenswrapper[4725]: I1014 13:29:09.551879 4725 generic.go:334] "Generic (PLEG): container finished" 
podID="61036e72-6020-469f-9406-d49df28fbd36" containerID="168cece75a63d0352e2c32b32e060f0e8e9c34745f492cf4fa4b781c30096196" exitCode=0 Oct 14 13:29:09 crc kubenswrapper[4725]: I1014 13:29:09.551988 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/98d1e881be4ee8680d11b9a5631544c7dac60e779fa4b1480aa7108c3f4xjzs" event={"ID":"61036e72-6020-469f-9406-d49df28fbd36","Type":"ContainerDied","Data":"168cece75a63d0352e2c32b32e060f0e8e9c34745f492cf4fa4b781c30096196"} Oct 14 13:29:10 crc kubenswrapper[4725]: I1014 13:29:10.561111 4725 generic.go:334] "Generic (PLEG): container finished" podID="61036e72-6020-469f-9406-d49df28fbd36" containerID="47d3c70901d7b9ec515539b285357b2e1932e9c3e47d585098b0e113033f7cfc" exitCode=0 Oct 14 13:29:10 crc kubenswrapper[4725]: I1014 13:29:10.561159 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/98d1e881be4ee8680d11b9a5631544c7dac60e779fa4b1480aa7108c3f4xjzs" event={"ID":"61036e72-6020-469f-9406-d49df28fbd36","Type":"ContainerDied","Data":"47d3c70901d7b9ec515539b285357b2e1932e9c3e47d585098b0e113033f7cfc"} Oct 14 13:29:11 crc kubenswrapper[4725]: I1014 13:29:11.890192 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/98d1e881be4ee8680d11b9a5631544c7dac60e779fa4b1480aa7108c3f4xjzs" Oct 14 13:29:12 crc kubenswrapper[4725]: I1014 13:29:12.055319 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/61036e72-6020-469f-9406-d49df28fbd36-bundle\") pod \"61036e72-6020-469f-9406-d49df28fbd36\" (UID: \"61036e72-6020-469f-9406-d49df28fbd36\") " Oct 14 13:29:12 crc kubenswrapper[4725]: I1014 13:29:12.055747 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/61036e72-6020-469f-9406-d49df28fbd36-util\") pod \"61036e72-6020-469f-9406-d49df28fbd36\" (UID: \"61036e72-6020-469f-9406-d49df28fbd36\") " Oct 14 13:29:12 crc kubenswrapper[4725]: I1014 13:29:12.055779 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fk8z4\" (UniqueName: \"kubernetes.io/projected/61036e72-6020-469f-9406-d49df28fbd36-kube-api-access-fk8z4\") pod \"61036e72-6020-469f-9406-d49df28fbd36\" (UID: \"61036e72-6020-469f-9406-d49df28fbd36\") " Oct 14 13:29:12 crc kubenswrapper[4725]: I1014 13:29:12.056280 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61036e72-6020-469f-9406-d49df28fbd36-bundle" (OuterVolumeSpecName: "bundle") pod "61036e72-6020-469f-9406-d49df28fbd36" (UID: "61036e72-6020-469f-9406-d49df28fbd36"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:29:12 crc kubenswrapper[4725]: I1014 13:29:12.064044 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61036e72-6020-469f-9406-d49df28fbd36-kube-api-access-fk8z4" (OuterVolumeSpecName: "kube-api-access-fk8z4") pod "61036e72-6020-469f-9406-d49df28fbd36" (UID: "61036e72-6020-469f-9406-d49df28fbd36"). InnerVolumeSpecName "kube-api-access-fk8z4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:29:12 crc kubenswrapper[4725]: I1014 13:29:12.070625 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61036e72-6020-469f-9406-d49df28fbd36-util" (OuterVolumeSpecName: "util") pod "61036e72-6020-469f-9406-d49df28fbd36" (UID: "61036e72-6020-469f-9406-d49df28fbd36"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:29:12 crc kubenswrapper[4725]: I1014 13:29:12.157414 4725 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/61036e72-6020-469f-9406-d49df28fbd36-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:29:12 crc kubenswrapper[4725]: I1014 13:29:12.157495 4725 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/61036e72-6020-469f-9406-d49df28fbd36-util\") on node \"crc\" DevicePath \"\"" Oct 14 13:29:12 crc kubenswrapper[4725]: I1014 13:29:12.157509 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fk8z4\" (UniqueName: \"kubernetes.io/projected/61036e72-6020-469f-9406-d49df28fbd36-kube-api-access-fk8z4\") on node \"crc\" DevicePath \"\"" Oct 14 13:29:12 crc kubenswrapper[4725]: I1014 13:29:12.578881 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/98d1e881be4ee8680d11b9a5631544c7dac60e779fa4b1480aa7108c3f4xjzs" event={"ID":"61036e72-6020-469f-9406-d49df28fbd36","Type":"ContainerDied","Data":"d21580f32d37e1c5d595e3e475f92ffc7ee0f275ef13c772fcc15081325a738f"} Oct 14 13:29:12 crc kubenswrapper[4725]: I1014 13:29:12.578931 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d21580f32d37e1c5d595e3e475f92ffc7ee0f275ef13c772fcc15081325a738f" Oct 14 13:29:12 crc kubenswrapper[4725]: I1014 13:29:12.578978 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/98d1e881be4ee8680d11b9a5631544c7dac60e779fa4b1480aa7108c3f4xjzs" Oct 14 13:29:16 crc kubenswrapper[4725]: I1014 13:29:16.270030 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-577669444d-64dvz"] Oct 14 13:29:16 crc kubenswrapper[4725]: E1014 13:29:16.270732 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61036e72-6020-469f-9406-d49df28fbd36" containerName="pull" Oct 14 13:29:16 crc kubenswrapper[4725]: I1014 13:29:16.270746 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="61036e72-6020-469f-9406-d49df28fbd36" containerName="pull" Oct 14 13:29:16 crc kubenswrapper[4725]: E1014 13:29:16.270765 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61036e72-6020-469f-9406-d49df28fbd36" containerName="util" Oct 14 13:29:16 crc kubenswrapper[4725]: I1014 13:29:16.270778 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="61036e72-6020-469f-9406-d49df28fbd36" containerName="util" Oct 14 13:29:16 crc kubenswrapper[4725]: E1014 13:29:16.270796 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61036e72-6020-469f-9406-d49df28fbd36" containerName="extract" Oct 14 13:29:16 crc kubenswrapper[4725]: I1014 13:29:16.270802 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="61036e72-6020-469f-9406-d49df28fbd36" containerName="extract" Oct 14 13:29:16 crc kubenswrapper[4725]: I1014 13:29:16.270914 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="61036e72-6020-469f-9406-d49df28fbd36" containerName="extract" Oct 14 13:29:16 crc kubenswrapper[4725]: I1014 13:29:16.271516 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-577669444d-64dvz" Oct 14 13:29:16 crc kubenswrapper[4725]: I1014 13:29:16.273664 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-2bfvb" Oct 14 13:29:16 crc kubenswrapper[4725]: I1014 13:29:16.293557 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-577669444d-64dvz"] Oct 14 13:29:16 crc kubenswrapper[4725]: I1014 13:29:16.414160 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zkcr\" (UniqueName: \"kubernetes.io/projected/859cc4b3-36d4-43aa-8780-12ffdfef67e1-kube-api-access-9zkcr\") pod \"openstack-operator-controller-operator-577669444d-64dvz\" (UID: \"859cc4b3-36d4-43aa-8780-12ffdfef67e1\") " pod="openstack-operators/openstack-operator-controller-operator-577669444d-64dvz" Oct 14 13:29:16 crc kubenswrapper[4725]: I1014 13:29:16.515208 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zkcr\" (UniqueName: \"kubernetes.io/projected/859cc4b3-36d4-43aa-8780-12ffdfef67e1-kube-api-access-9zkcr\") pod \"openstack-operator-controller-operator-577669444d-64dvz\" (UID: \"859cc4b3-36d4-43aa-8780-12ffdfef67e1\") " pod="openstack-operators/openstack-operator-controller-operator-577669444d-64dvz" Oct 14 13:29:16 crc kubenswrapper[4725]: I1014 13:29:16.540859 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zkcr\" (UniqueName: \"kubernetes.io/projected/859cc4b3-36d4-43aa-8780-12ffdfef67e1-kube-api-access-9zkcr\") pod \"openstack-operator-controller-operator-577669444d-64dvz\" 
(UID: \"859cc4b3-36d4-43aa-8780-12ffdfef67e1\") " pod="openstack-operators/openstack-operator-controller-operator-577669444d-64dvz" Oct 14 13:29:16 crc kubenswrapper[4725]: I1014 13:29:16.589034 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-577669444d-64dvz" Oct 14 13:29:17 crc kubenswrapper[4725]: I1014 13:29:17.057734 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-577669444d-64dvz"] Oct 14 13:29:17 crc kubenswrapper[4725]: I1014 13:29:17.619011 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-577669444d-64dvz" event={"ID":"859cc4b3-36d4-43aa-8780-12ffdfef67e1","Type":"ContainerStarted","Data":"acc5606466ec2f347d34076e34f68e073f3a1be39b3af650c6148afe31b93191"} Oct 14 13:29:20 crc kubenswrapper[4725]: I1014 13:29:20.639852 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-577669444d-64dvz" event={"ID":"859cc4b3-36d4-43aa-8780-12ffdfef67e1","Type":"ContainerStarted","Data":"0832f1f6e075383669b53ef95e22c2470212d17c28b598be5683c2da8d16d323"} Oct 14 13:29:23 crc kubenswrapper[4725]: I1014 13:29:23.660366 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-577669444d-64dvz" event={"ID":"859cc4b3-36d4-43aa-8780-12ffdfef67e1","Type":"ContainerStarted","Data":"b5e7ec4d016b00b450686b2d2ff1376307c2d8d47445359bc2ebcc2f8ad326cd"} Oct 14 13:29:23 crc kubenswrapper[4725]: I1014 13:29:23.660993 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-577669444d-64dvz" Oct 14 13:29:23 crc kubenswrapper[4725]: I1014 13:29:23.711347 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-577669444d-64dvz" podStartSLOduration=2.220417543 podStartE2EDuration="7.711321416s" podCreationTimestamp="2025-10-14 13:29:16 +0000 UTC" firstStartedPulling="2025-10-14 13:29:17.056372452 +0000 UTC m=+873.904807261" lastFinishedPulling="2025-10-14 13:29:22.547276325 +0000 UTC m=+879.395711134" observedRunningTime="2025-10-14 13:29:23.704606742 +0000 UTC m=+880.553041601" watchObservedRunningTime="2025-10-14 13:29:23.711321416 +0000 UTC m=+880.559756265" Oct 14 13:29:26 crc kubenswrapper[4725]: I1014 13:29:26.592724 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-577669444d-64dvz" Oct 14 13:29:40 crc kubenswrapper[4725]: I1014 13:29:40.771395 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-59cdc64769-hbgnj"] Oct 14 13:29:40 crc kubenswrapper[4725]: I1014 13:29:40.773716 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-hbgnj" Oct 14 13:29:40 crc kubenswrapper[4725]: I1014 13:29:40.778042 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-64f84fcdbb-wnx2r"] Oct 14 13:29:40 crc kubenswrapper[4725]: I1014 13:29:40.779377 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-wnx2r" Oct 14 13:29:40 crc kubenswrapper[4725]: I1014 13:29:40.783069 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-8pxhw" Oct 14 13:29:40 crc kubenswrapper[4725]: I1014 13:29:40.783069 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-4nq66" Oct 14 13:29:40 crc kubenswrapper[4725]: I1014 13:29:40.784509 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-59cdc64769-hbgnj"] Oct 14 13:29:40 crc kubenswrapper[4725]: I1014 13:29:40.790567 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-64f84fcdbb-wnx2r"] Oct 14 13:29:40 crc kubenswrapper[4725]: I1014 13:29:40.814575 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-7bb46cd7d-pjkj4"] Oct 14 13:29:40 crc kubenswrapper[4725]: I1014 13:29:40.815590 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-pjkj4" Oct 14 13:29:40 crc kubenswrapper[4725]: I1014 13:29:40.820765 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-687df44cdb-h8bkv"] Oct 14 13:29:40 crc kubenswrapper[4725]: I1014 13:29:40.821249 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-qmb45" Oct 14 13:29:40 crc kubenswrapper[4725]: I1014 13:29:40.821929 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-687df44cdb-h8bkv" Oct 14 13:29:40 crc kubenswrapper[4725]: I1014 13:29:40.825836 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-5m6qj" Oct 14 13:29:40 crc kubenswrapper[4725]: I1014 13:29:40.838519 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-6d9967f8dd-w9x95"] Oct 14 13:29:40 crc kubenswrapper[4725]: I1014 13:29:40.840001 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-w9x95" Oct 14 13:29:40 crc kubenswrapper[4725]: I1014 13:29:40.843205 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-687df44cdb-h8bkv"] Oct 14 13:29:40 crc kubenswrapper[4725]: I1014 13:29:40.850548 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7bb46cd7d-pjkj4"] Oct 14 13:29:40 crc kubenswrapper[4725]: I1014 13:29:40.851380 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-98bbn" Oct 14 13:29:40 crc kubenswrapper[4725]: I1014 13:29:40.854777 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-6d9967f8dd-w9x95"] Oct 14 13:29:40 crc kubenswrapper[4725]: I1014 13:29:40.894672 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d74794d9b-96vkx"] Oct 14 13:29:40 crc kubenswrapper[4725]: I1014 13:29:40.897132 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-96vkx" Oct 14 13:29:40 crc kubenswrapper[4725]: I1014 13:29:40.903065 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-mbq7p" Oct 14 13:29:40 crc kubenswrapper[4725]: I1014 13:29:40.905332 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jxnb\" (UniqueName: \"kubernetes.io/projected/f8ae62f1-e80d-4f8c-81c3-0c5c50338046-kube-api-access-4jxnb\") pod \"cinder-operator-controller-manager-59cdc64769-hbgnj\" (UID: \"f8ae62f1-e80d-4f8c-81c3-0c5c50338046\") " pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-hbgnj" Oct 14 13:29:40 crc kubenswrapper[4725]: I1014 13:29:40.905392 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7nhg\" (UniqueName: \"kubernetes.io/projected/ea9bba90-8416-4b43-a2be-d41f635db481-kube-api-access-v7nhg\") pod \"barbican-operator-controller-manager-64f84fcdbb-wnx2r\" (UID: \"ea9bba90-8416-4b43-a2be-d41f635db481\") " pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-wnx2r" Oct 14 13:29:40 crc kubenswrapper[4725]: I1014 13:29:40.929615 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-585fc5b659-jgbh4"] Oct 14 13:29:40 crc kubenswrapper[4725]: I1014 13:29:40.930994 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-jgbh4" Oct 14 13:29:40 crc kubenswrapper[4725]: I1014 13:29:40.936910 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-ltgfl" Oct 14 13:29:40 crc kubenswrapper[4725]: I1014 13:29:40.937104 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Oct 14 13:29:40 crc kubenswrapper[4725]: I1014 13:29:40.954962 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d74794d9b-96vkx"] Oct 14 13:29:40 crc kubenswrapper[4725]: I1014 13:29:40.961465 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-585fc5b659-jgbh4"] Oct 14 13:29:40 crc kubenswrapper[4725]: I1014 13:29:40.978557 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-74cb5cbc49-tt2xm"] Oct 14 13:29:40 crc kubenswrapper[4725]: I1014 13:29:40.980054 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-tt2xm" Oct 14 13:29:40 crc kubenswrapper[4725]: I1014 13:29:40.984760 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-hvtjx" Oct 14 13:29:40 crc kubenswrapper[4725]: I1014 13:29:40.986418 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-74cb5cbc49-tt2xm"] Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.003199 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-ddb98f99b-8jwrn"] Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.004170 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-8jwrn" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.005657 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-59578bc799-4jwx9"] Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.005873 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-lgdpw" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.006613 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-59578bc799-4jwx9" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.007753 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jxnb\" (UniqueName: \"kubernetes.io/projected/f8ae62f1-e80d-4f8c-81c3-0c5c50338046-kube-api-access-4jxnb\") pod \"cinder-operator-controller-manager-59cdc64769-hbgnj\" (UID: \"f8ae62f1-e80d-4f8c-81c3-0c5c50338046\") " pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-hbgnj" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.007778 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7nhg\" (UniqueName: \"kubernetes.io/projected/ea9bba90-8416-4b43-a2be-d41f635db481-kube-api-access-v7nhg\") pod \"barbican-operator-controller-manager-64f84fcdbb-wnx2r\" (UID: \"ea9bba90-8416-4b43-a2be-d41f635db481\") " pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-wnx2r" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.007830 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prqmc\" (UniqueName: \"kubernetes.io/projected/40be9ede-1b5e-4022-9626-8367074f88c1-kube-api-access-prqmc\") pod \"designate-operator-controller-manager-687df44cdb-h8bkv\" (UID: \"40be9ede-1b5e-4022-9626-8367074f88c1\") " pod="openstack-operators/designate-operator-controller-manager-687df44cdb-h8bkv" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.007849 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98hjk\" (UniqueName: \"kubernetes.io/projected/8dc80d7a-d38e-4baa-85ae-fa856c39b48f-kube-api-access-98hjk\") pod \"heat-operator-controller-manager-6d9967f8dd-w9x95\" (UID: \"8dc80d7a-d38e-4baa-85ae-fa856c39b48f\") " pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-w9x95" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.007888 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74mcg\" (UniqueName: \"kubernetes.io/projected/f0b42b9f-7713-48ca-b148-c42b5d2006f3-kube-api-access-74mcg\") pod \"horizon-operator-controller-manager-6d74794d9b-96vkx\" (UID: \"f0b42b9f-7713-48ca-b148-c42b5d2006f3\") " pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-96vkx" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.007915 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgmvq\" (UniqueName: \"kubernetes.io/projected/da9db202-78ec-4df7-9ead-374e287391a2-kube-api-access-zgmvq\") pod \"glance-operator-controller-manager-7bb46cd7d-pjkj4\" (UID: \"da9db202-78ec-4df7-9ead-374e287391a2\") " pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-pjkj4" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.011130 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-h4wzg" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.016704 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-5777b4f897-9q25r"] Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.018193 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-9q25r" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.021493 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-bhjxh" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.033198 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-ddb98f99b-8jwrn"] Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.035658 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jxnb\" (UniqueName: \"kubernetes.io/projected/f8ae62f1-e80d-4f8c-81c3-0c5c50338046-kube-api-access-4jxnb\") pod \"cinder-operator-controller-manager-59cdc64769-hbgnj\" (UID: \"f8ae62f1-e80d-4f8c-81c3-0c5c50338046\") " pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-hbgnj" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.039189 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7nhg\" (UniqueName: \"kubernetes.io/projected/ea9bba90-8416-4b43-a2be-d41f635db481-kube-api-access-v7nhg\") pod \"barbican-operator-controller-manager-64f84fcdbb-wnx2r\" (UID: \"ea9bba90-8416-4b43-a2be-d41f635db481\") " pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-wnx2r" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.046560 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-59578bc799-4jwx9"] Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.051781 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-5777b4f897-9q25r"] Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.071876 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-797d478b46-55m5j"] Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.075969 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-797d478b46-55m5j" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.083132 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-57bb74c7bf-mvqp4"] Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.084231 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-mvqp4" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.090098 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-797d478b46-55m5j"] Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.091050 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-pw7qs" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.091231 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-qnvw4" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.094916 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-d258r"] Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.096127 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-d258r" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.099972 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-8dt67" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.100040 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-hbgnj" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.104534 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-57bb74c7bf-mvqp4"] Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.110708 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8qfv\" (UniqueName: \"kubernetes.io/projected/0ba97bf9-2b5b-495b-99e2-328a987535e1-kube-api-access-s8qfv\") pod \"manila-operator-controller-manager-59578bc799-4jwx9\" (UID: \"0ba97bf9-2b5b-495b-99e2-328a987535e1\") " pod="openstack-operators/manila-operator-controller-manager-59578bc799-4jwx9" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.110784 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prqmc\" (UniqueName: \"kubernetes.io/projected/40be9ede-1b5e-4022-9626-8367074f88c1-kube-api-access-prqmc\") pod \"designate-operator-controller-manager-687df44cdb-h8bkv\" (UID: \"40be9ede-1b5e-4022-9626-8367074f88c1\") " pod="openstack-operators/designate-operator-controller-manager-687df44cdb-h8bkv" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.110809 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98hjk\" (UniqueName: \"kubernetes.io/projected/8dc80d7a-d38e-4baa-85ae-fa856c39b48f-kube-api-access-98hjk\") pod \"heat-operator-controller-manager-6d9967f8dd-w9x95\" (UID: \"8dc80d7a-d38e-4baa-85ae-fa856c39b48f\") " pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-w9x95" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.111067 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-wnx2r" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.113002 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74mcg\" (UniqueName: \"kubernetes.io/projected/f0b42b9f-7713-48ca-b148-c42b5d2006f3-kube-api-access-74mcg\") pod \"horizon-operator-controller-manager-6d74794d9b-96vkx\" (UID: \"f0b42b9f-7713-48ca-b148-c42b5d2006f3\") " pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-96vkx" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.113059 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgmvq\" (UniqueName: \"kubernetes.io/projected/da9db202-78ec-4df7-9ead-374e287391a2-kube-api-access-zgmvq\") pod \"glance-operator-controller-manager-7bb46cd7d-pjkj4\" (UID: \"da9db202-78ec-4df7-9ead-374e287391a2\") " pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-pjkj4" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.113083 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58lrq\" (UniqueName: \"kubernetes.io/projected/3316f5a0-820b-45ec-802a-dc3203f1d9fa-kube-api-access-58lrq\") pod \"infra-operator-controller-manager-585fc5b659-jgbh4\" (UID: \"3316f5a0-820b-45ec-802a-dc3203f1d9fa\") " pod="openstack-operators/infra-operator-controller-manager-585fc5b659-jgbh4" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.113142 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3316f5a0-820b-45ec-802a-dc3203f1d9fa-cert\") pod \"infra-operator-controller-manager-585fc5b659-jgbh4\" (UID: \"3316f5a0-820b-45ec-802a-dc3203f1d9fa\") " pod="openstack-operators/infra-operator-controller-manager-585fc5b659-jgbh4" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.113202 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28qz4\" (UniqueName: \"kubernetes.io/projected/95db1b3c-9877-4e8f-b756-532ccfd5db7a-kube-api-access-28qz4\") pod \"keystone-operator-controller-manager-ddb98f99b-8jwrn\" (UID: \"95db1b3c-9877-4e8f-b756-532ccfd5db7a\") " pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-8jwrn" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.113329 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjzrd\" (UniqueName: \"kubernetes.io/projected/64f86a1d-32a0-4133-94c3-b59ab14a0d4e-kube-api-access-wjzrd\") pod \"ironic-operator-controller-manager-74cb5cbc49-tt2xm\" (UID: \"64f86a1d-32a0-4133-94c3-b59ab14a0d4e\") " pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-tt2xm" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.113442 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-d258r"] Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.132132 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757d5s454"] Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.133094 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757d5s454" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.134765 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.134650 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prqmc\" (UniqueName: \"kubernetes.io/projected/40be9ede-1b5e-4022-9626-8367074f88c1-kube-api-access-prqmc\") pod \"designate-operator-controller-manager-687df44cdb-h8bkv\" (UID: \"40be9ede-1b5e-4022-9626-8367074f88c1\") " pod="openstack-operators/designate-operator-controller-manager-687df44cdb-h8bkv" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.135351 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgmvq\" (UniqueName: \"kubernetes.io/projected/da9db202-78ec-4df7-9ead-374e287391a2-kube-api-access-zgmvq\") pod \"glance-operator-controller-manager-7bb46cd7d-pjkj4\" (UID: \"da9db202-78ec-4df7-9ead-374e287391a2\") " pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-pjkj4" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.136790 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98hjk\" (UniqueName: \"kubernetes.io/projected/8dc80d7a-d38e-4baa-85ae-fa856c39b48f-kube-api-access-98hjk\") pod \"heat-operator-controller-manager-6d9967f8dd-w9x95\" (UID: \"8dc80d7a-d38e-4baa-85ae-fa856c39b48f\") " pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-w9x95" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.137596 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-pjkj4" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.138840 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74mcg\" (UniqueName: \"kubernetes.io/projected/f0b42b9f-7713-48ca-b148-c42b5d2006f3-kube-api-access-74mcg\") pod \"horizon-operator-controller-manager-6d74794d9b-96vkx\" (UID: \"f0b42b9f-7713-48ca-b148-c42b5d2006f3\") " pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-96vkx" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.146090 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-9dqb2" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.153254 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-687df44cdb-h8bkv" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.173612 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757d5s454"] Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.182904 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-869cc7797f-fnf6m"] Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.185797 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-fnf6m" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.187232 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-lfmhr" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.190184 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-869cc7797f-fnf6m"] Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.196920 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-w9x95" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.203404 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-664664cb68-b462h"] Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.205183 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-664664cb68-b462h" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.208409 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-5k482" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.211007 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-vmdhg"] Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.213971 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58lrq\" (UniqueName: \"kubernetes.io/projected/3316f5a0-820b-45ec-802a-dc3203f1d9fa-kube-api-access-58lrq\") pod \"infra-operator-controller-manager-585fc5b659-jgbh4\" (UID: \"3316f5a0-820b-45ec-802a-dc3203f1d9fa\") " pod="openstack-operators/infra-operator-controller-manager-585fc5b659-jgbh4" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.214010 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3316f5a0-820b-45ec-802a-dc3203f1d9fa-cert\") pod \"infra-operator-controller-manager-585fc5b659-jgbh4\" (UID: \"3316f5a0-820b-45ec-802a-dc3203f1d9fa\") " pod="openstack-operators/infra-operator-controller-manager-585fc5b659-jgbh4" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.214042 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tprtz\" (UniqueName: \"kubernetes.io/projected/960fbcd9-1563-4f84-89ac-53694a2413de-kube-api-access-tprtz\") pod \"mariadb-operator-controller-manager-5777b4f897-9q25r\" (UID: \"960fbcd9-1563-4f84-89ac-53694a2413de\") " pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-9q25r" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.214064 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28qz4\" (UniqueName: \"kubernetes.io/projected/95db1b3c-9877-4e8f-b756-532ccfd5db7a-kube-api-access-28qz4\") pod \"keystone-operator-controller-manager-ddb98f99b-8jwrn\" (UID: \"95db1b3c-9877-4e8f-b756-532ccfd5db7a\") " pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-8jwrn" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.214089 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj65c\" (UniqueName: 
\"kubernetes.io/projected/720eb902-cb6c-4d90-b25b-eae97e6d055d-kube-api-access-sj65c\") pod \"nova-operator-controller-manager-57bb74c7bf-mvqp4\" (UID: \"720eb902-cb6c-4d90-b25b-eae97e6d055d\") " pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-mvqp4" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.214110 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjzrd\" (UniqueName: \"kubernetes.io/projected/64f86a1d-32a0-4133-94c3-b59ab14a0d4e-kube-api-access-wjzrd\") pod \"ironic-operator-controller-manager-74cb5cbc49-tt2xm\" (UID: \"64f86a1d-32a0-4133-94c3-b59ab14a0d4e\") " pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-tt2xm" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.214152 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8qfv\" (UniqueName: \"kubernetes.io/projected/0ba97bf9-2b5b-495b-99e2-328a987535e1-kube-api-access-s8qfv\") pod \"manila-operator-controller-manager-59578bc799-4jwx9\" (UID: \"0ba97bf9-2b5b-495b-99e2-328a987535e1\") " pod="openstack-operators/manila-operator-controller-manager-59578bc799-4jwx9" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.214218 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rcbh\" (UniqueName: \"kubernetes.io/projected/15e9c123-7b58-49ab-b7d7-f429e6b15c1e-kube-api-access-8rcbh\") pod \"neutron-operator-controller-manager-797d478b46-55m5j\" (UID: \"15e9c123-7b58-49ab-b7d7-f429e6b15c1e\") " pod="openstack-operators/neutron-operator-controller-manager-797d478b46-55m5j" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.214252 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-vmdhg" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.214252 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxhkb\" (UniqueName: \"kubernetes.io/projected/e4436f5f-2c38-49ee-8e53-061e294f009f-kube-api-access-nxhkb\") pod \"octavia-operator-controller-manager-6d7c7ddf95-d258r\" (UID: \"e4436f5f-2c38-49ee-8e53-061e294f009f\") " pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-d258r" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.217712 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3316f5a0-820b-45ec-802a-dc3203f1d9fa-cert\") pod \"infra-operator-controller-manager-585fc5b659-jgbh4\" (UID: \"3316f5a0-820b-45ec-802a-dc3203f1d9fa\") " pod="openstack-operators/infra-operator-controller-manager-585fc5b659-jgbh4" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.221950 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-nh8xq" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.237216 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjzrd\" (UniqueName: \"kubernetes.io/projected/64f86a1d-32a0-4133-94c3-b59ab14a0d4e-kube-api-access-wjzrd\") pod \"ironic-operator-controller-manager-74cb5cbc49-tt2xm\" (UID: \"64f86a1d-32a0-4133-94c3-b59ab14a0d4e\") " pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-tt2xm" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.237274 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-664664cb68-b462h"] Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.243099 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58lrq\" (UniqueName: \"kubernetes.io/projected/3316f5a0-820b-45ec-802a-dc3203f1d9fa-kube-api-access-58lrq\") pod \"infra-operator-controller-manager-585fc5b659-jgbh4\" (UID: \"3316f5a0-820b-45ec-802a-dc3203f1d9fa\") " pod="openstack-operators/infra-operator-controller-manager-585fc5b659-jgbh4" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.243576 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28qz4\" (UniqueName: \"kubernetes.io/projected/95db1b3c-9877-4e8f-b756-532ccfd5db7a-kube-api-access-28qz4\") pod \"keystone-operator-controller-manager-ddb98f99b-8jwrn\" (UID: \"95db1b3c-9877-4e8f-b756-532ccfd5db7a\") " pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-8jwrn" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.243719 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-96vkx" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.244157 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8qfv\" (UniqueName: \"kubernetes.io/projected/0ba97bf9-2b5b-495b-99e2-328a987535e1-kube-api-access-s8qfv\") pod \"manila-operator-controller-manager-59578bc799-4jwx9\" (UID: \"0ba97bf9-2b5b-495b-99e2-328a987535e1\") " pod="openstack-operators/manila-operator-controller-manager-59578bc799-4jwx9" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.256579 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-jgbh4" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.265567 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-vmdhg"] Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.304739 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-tt2xm" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.317752 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tprtz\" (UniqueName: \"kubernetes.io/projected/960fbcd9-1563-4f84-89ac-53694a2413de-kube-api-access-tprtz\") pod \"mariadb-operator-controller-manager-5777b4f897-9q25r\" (UID: \"960fbcd9-1563-4f84-89ac-53694a2413de\") " pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-9q25r" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.317821 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj65c\" (UniqueName: \"kubernetes.io/projected/720eb902-cb6c-4d90-b25b-eae97e6d055d-kube-api-access-sj65c\") pod \"nova-operator-controller-manager-57bb74c7bf-mvqp4\" (UID: \"720eb902-cb6c-4d90-b25b-eae97e6d055d\") " pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-mvqp4" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.317858 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9d9f\" (UniqueName: \"kubernetes.io/projected/28bb98a9-5146-48a5-8f4f-a3a7766ab18c-kube-api-access-c9d9f\") pod \"ovn-operator-controller-manager-869cc7797f-fnf6m\" (UID: \"28bb98a9-5146-48a5-8f4f-a3a7766ab18c\") " pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-fnf6m" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.317896 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n68rm\" (UniqueName: \"kubernetes.io/projected/1ee65fef-639a-4cff-8e7e-b68f7dd3a4ec-kube-api-access-n68rm\") pod \"openstack-baremetal-operator-controller-manager-6cc7fb757d5s454\" (UID: \"1ee65fef-639a-4cff-8e7e-b68f7dd3a4ec\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757d5s454" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.318364 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgphz\" (UniqueName: \"kubernetes.io/projected/3d451d86-1531-4612-a2c8-5be10e09f890-kube-api-access-hgphz\") pod \"swift-operator-controller-manager-5f4d5dfdc6-vmdhg\" (UID: \"3d451d86-1531-4612-a2c8-5be10e09f890\") " pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-vmdhg" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.318405 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1ee65fef-639a-4cff-8e7e-b68f7dd3a4ec-cert\") pod \"openstack-baremetal-operator-controller-manager-6cc7fb757d5s454\" (UID: \"1ee65fef-639a-4cff-8e7e-b68f7dd3a4ec\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757d5s454" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.318486 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rcbh\" (UniqueName: 
\"kubernetes.io/projected/15e9c123-7b58-49ab-b7d7-f429e6b15c1e-kube-api-access-8rcbh\") pod \"neutron-operator-controller-manager-797d478b46-55m5j\" (UID: \"15e9c123-7b58-49ab-b7d7-f429e6b15c1e\") " pod="openstack-operators/neutron-operator-controller-manager-797d478b46-55m5j" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.318535 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn8db\" (UniqueName: \"kubernetes.io/projected/420f32f0-6ea7-4489-8165-215def02acf1-kube-api-access-wn8db\") pod \"placement-operator-controller-manager-664664cb68-b462h\" (UID: \"420f32f0-6ea7-4489-8165-215def02acf1\") " pod="openstack-operators/placement-operator-controller-manager-664664cb68-b462h" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.318565 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxhkb\" (UniqueName: \"kubernetes.io/projected/e4436f5f-2c38-49ee-8e53-061e294f009f-kube-api-access-nxhkb\") pod \"octavia-operator-controller-manager-6d7c7ddf95-d258r\" (UID: \"e4436f5f-2c38-49ee-8e53-061e294f009f\") " pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-d258r" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.328040 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-8jwrn" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.339189 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-59578bc799-4jwx9" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.340125 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxhkb\" (UniqueName: \"kubernetes.io/projected/e4436f5f-2c38-49ee-8e53-061e294f009f-kube-api-access-nxhkb\") pod \"octavia-operator-controller-manager-6d7c7ddf95-d258r\" (UID: \"e4436f5f-2c38-49ee-8e53-061e294f009f\") " pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-d258r" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.349431 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rcbh\" (UniqueName: \"kubernetes.io/projected/15e9c123-7b58-49ab-b7d7-f429e6b15c1e-kube-api-access-8rcbh\") pod \"neutron-operator-controller-manager-797d478b46-55m5j\" (UID: \"15e9c123-7b58-49ab-b7d7-f429e6b15c1e\") " pod="openstack-operators/neutron-operator-controller-manager-797d478b46-55m5j" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.350374 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tprtz\" (UniqueName: \"kubernetes.io/projected/960fbcd9-1563-4f84-89ac-53694a2413de-kube-api-access-tprtz\") pod \"mariadb-operator-controller-manager-5777b4f897-9q25r\" (UID: \"960fbcd9-1563-4f84-89ac-53694a2413de\") " pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-9q25r" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.354229 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj65c\" (UniqueName: \"kubernetes.io/projected/720eb902-cb6c-4d90-b25b-eae97e6d055d-kube-api-access-sj65c\") pod \"nova-operator-controller-manager-57bb74c7bf-mvqp4\" (UID: \"720eb902-cb6c-4d90-b25b-eae97e6d055d\") " pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-mvqp4" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.374572 4725 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-578874c84d-878xx"] Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.376483 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-878xx" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.376995 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-9q25r" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.381264 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-qjrdr" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.401687 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-578874c84d-878xx"] Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.404258 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-797d478b46-55m5j" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.420533 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9d9f\" (UniqueName: \"kubernetes.io/projected/28bb98a9-5146-48a5-8f4f-a3a7766ab18c-kube-api-access-c9d9f\") pod \"ovn-operator-controller-manager-869cc7797f-fnf6m\" (UID: \"28bb98a9-5146-48a5-8f4f-a3a7766ab18c\") " pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-fnf6m" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.420599 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n68rm\" (UniqueName: \"kubernetes.io/projected/1ee65fef-639a-4cff-8e7e-b68f7dd3a4ec-kube-api-access-n68rm\") pod \"openstack-baremetal-operator-controller-manager-6cc7fb757d5s454\" (UID: \"1ee65fef-639a-4cff-8e7e-b68f7dd3a4ec\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757d5s454" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.420620 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgphz\" (UniqueName: \"kubernetes.io/projected/3d451d86-1531-4612-a2c8-5be10e09f890-kube-api-access-hgphz\") pod \"swift-operator-controller-manager-5f4d5dfdc6-vmdhg\" (UID: \"3d451d86-1531-4612-a2c8-5be10e09f890\") " pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-vmdhg" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.420643 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1ee65fef-639a-4cff-8e7e-b68f7dd3a4ec-cert\") pod \"openstack-baremetal-operator-controller-manager-6cc7fb757d5s454\" (UID: \"1ee65fef-639a-4cff-8e7e-b68f7dd3a4ec\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757d5s454" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.420674 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr7w4\" (UniqueName: \"kubernetes.io/projected/fb53b295-5121-4b18-9d3d-e3e981a64f2d-kube-api-access-nr7w4\") pod \"telemetry-operator-controller-manager-578874c84d-878xx\" (UID: \"fb53b295-5121-4b18-9d3d-e3e981a64f2d\") " pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-878xx" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 
13:29:41.420696 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn8db\" (UniqueName: \"kubernetes.io/projected/420f32f0-6ea7-4489-8165-215def02acf1-kube-api-access-wn8db\") pod \"placement-operator-controller-manager-664664cb68-b462h\" (UID: \"420f32f0-6ea7-4489-8165-215def02acf1\") " pod="openstack-operators/placement-operator-controller-manager-664664cb68-b462h" Oct 14 13:29:41 crc kubenswrapper[4725]: E1014 13:29:41.421032 4725 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Oct 14 13:29:41 crc kubenswrapper[4725]: E1014 13:29:41.421074 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1ee65fef-639a-4cff-8e7e-b68f7dd3a4ec-cert podName:1ee65fef-639a-4cff-8e7e-b68f7dd3a4ec nodeName:}" failed. No retries permitted until 2025-10-14 13:29:41.921057593 +0000 UTC m=+898.769492402 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1ee65fef-639a-4cff-8e7e-b68f7dd3a4ec-cert") pod "openstack-baremetal-operator-controller-manager-6cc7fb757d5s454" (UID: "1ee65fef-639a-4cff-8e7e-b68f7dd3a4ec") : secret "openstack-baremetal-operator-webhook-server-cert" not found Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.435316 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-mvqp4" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.449252 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgphz\" (UniqueName: \"kubernetes.io/projected/3d451d86-1531-4612-a2c8-5be10e09f890-kube-api-access-hgphz\") pod \"swift-operator-controller-manager-5f4d5dfdc6-vmdhg\" (UID: \"3d451d86-1531-4612-a2c8-5be10e09f890\") " pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-vmdhg" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.458405 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn8db\" (UniqueName: \"kubernetes.io/projected/420f32f0-6ea7-4489-8165-215def02acf1-kube-api-access-wn8db\") pod \"placement-operator-controller-manager-664664cb68-b462h\" (UID: \"420f32f0-6ea7-4489-8165-215def02acf1\") " pod="openstack-operators/placement-operator-controller-manager-664664cb68-b462h" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.458438 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9d9f\" (UniqueName: \"kubernetes.io/projected/28bb98a9-5146-48a5-8f4f-a3a7766ab18c-kube-api-access-c9d9f\") pod \"ovn-operator-controller-manager-869cc7797f-fnf6m\" (UID: \"28bb98a9-5146-48a5-8f4f-a3a7766ab18c\") " pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-fnf6m" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.460047 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n68rm\" (UniqueName: \"kubernetes.io/projected/1ee65fef-639a-4cff-8e7e-b68f7dd3a4ec-kube-api-access-n68rm\") pod \"openstack-baremetal-operator-controller-manager-6cc7fb757d5s454\" (UID: \"1ee65fef-639a-4cff-8e7e-b68f7dd3a4ec\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757d5s454" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.474268 4725 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/test-operator-controller-manager-ffcdd6c94-nhdz7"] Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.475444 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-nhdz7" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.477872 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-kbh5q" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.488625 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-ffcdd6c94-nhdz7"] Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.492330 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-d258r" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.505760 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-646675d848-l76vk"] Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.507278 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-646675d848-l76vk" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.510879 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-ck26n" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.513442 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-646675d848-l76vk"] Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.513990 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-fnf6m" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.527211 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nr7w4\" (UniqueName: \"kubernetes.io/projected/fb53b295-5121-4b18-9d3d-e3e981a64f2d-kube-api-access-nr7w4\") pod \"telemetry-operator-controller-manager-578874c84d-878xx\" (UID: \"fb53b295-5121-4b18-9d3d-e3e981a64f2d\") " pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-878xx" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.535302 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-664664cb68-b462h" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.548241 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-vmdhg" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.550924 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7cff5c958-wscp6"] Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.552253 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7cff5c958-wscp6" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.558357 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.563159 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-sj4j6" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.589656 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7cff5c958-wscp6"] Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.600086 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nr7w4\" (UniqueName: \"kubernetes.io/projected/fb53b295-5121-4b18-9d3d-e3e981a64f2d-kube-api-access-nr7w4\") pod \"telemetry-operator-controller-manager-578874c84d-878xx\" (UID: \"fb53b295-5121-4b18-9d3d-e3e981a64f2d\") " pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-878xx" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.601096 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-tsxsk"] Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.601915 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-tsxsk" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.608648 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-tmwtq" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.612917 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-tsxsk"] Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.616219 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-878xx" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.630651 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjf8h\" (UniqueName: \"kubernetes.io/projected/3a364ec5-2c66-4d8d-8f51-b3b4357e7b67-kube-api-access-rjf8h\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-tsxsk\" (UID: \"3a364ec5-2c66-4d8d-8f51-b3b4357e7b67\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-tsxsk" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.630783 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6r5q\" (UniqueName: \"kubernetes.io/projected/50605e76-3c4b-480c-9a76-e84962f38851-kube-api-access-q6r5q\") pod \"watcher-operator-controller-manager-646675d848-l76vk\" (UID: \"50605e76-3c4b-480c-9a76-e84962f38851\") " pod="openstack-operators/watcher-operator-controller-manager-646675d848-l76vk" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.630893 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-229z2\" (UniqueName: \"kubernetes.io/projected/cb31097e-d603-4f4d-8cc1-a5f1a841ea00-kube-api-access-229z2\") pod \"test-operator-controller-manager-ffcdd6c94-nhdz7\" (UID: \"cb31097e-d603-4f4d-8cc1-a5f1a841ea00\") " pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-nhdz7" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.630953 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2627f8df-e54f-45e7-862a-fcacad250f2a-cert\") pod \"openstack-operator-controller-manager-7cff5c958-wscp6\" (UID: \"2627f8df-e54f-45e7-862a-fcacad250f2a\") " pod="openstack-operators/openstack-operator-controller-manager-7cff5c958-wscp6" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.631000 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbzzk\" (UniqueName: \"kubernetes.io/projected/2627f8df-e54f-45e7-862a-fcacad250f2a-kube-api-access-gbzzk\") pod \"openstack-operator-controller-manager-7cff5c958-wscp6\" (UID: \"2627f8df-e54f-45e7-862a-fcacad250f2a\") " pod="openstack-operators/openstack-operator-controller-manager-7cff5c958-wscp6" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.733608 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-229z2\" (UniqueName: \"kubernetes.io/projected/cb31097e-d603-4f4d-8cc1-a5f1a841ea00-kube-api-access-229z2\") pod \"test-operator-controller-manager-ffcdd6c94-nhdz7\" (UID: \"cb31097e-d603-4f4d-8cc1-a5f1a841ea00\") " pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-nhdz7" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.733669 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2627f8df-e54f-45e7-862a-fcacad250f2a-cert\") pod \"openstack-operator-controller-manager-7cff5c958-wscp6\" (UID: \"2627f8df-e54f-45e7-862a-fcacad250f2a\") " pod="openstack-operators/openstack-operator-controller-manager-7cff5c958-wscp6" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.733709 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbzzk\" (UniqueName: 
\"kubernetes.io/projected/2627f8df-e54f-45e7-862a-fcacad250f2a-kube-api-access-gbzzk\") pod \"openstack-operator-controller-manager-7cff5c958-wscp6\" (UID: \"2627f8df-e54f-45e7-862a-fcacad250f2a\") " pod="openstack-operators/openstack-operator-controller-manager-7cff5c958-wscp6" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.733784 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjf8h\" (UniqueName: \"kubernetes.io/projected/3a364ec5-2c66-4d8d-8f51-b3b4357e7b67-kube-api-access-rjf8h\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-tsxsk\" (UID: \"3a364ec5-2c66-4d8d-8f51-b3b4357e7b67\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-tsxsk" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.733838 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6r5q\" (UniqueName: \"kubernetes.io/projected/50605e76-3c4b-480c-9a76-e84962f38851-kube-api-access-q6r5q\") pod \"watcher-operator-controller-manager-646675d848-l76vk\" (UID: \"50605e76-3c4b-480c-9a76-e84962f38851\") " pod="openstack-operators/watcher-operator-controller-manager-646675d848-l76vk" Oct 14 13:29:41 crc kubenswrapper[4725]: E1014 13:29:41.734416 4725 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Oct 14 13:29:41 crc kubenswrapper[4725]: E1014 13:29:41.734490 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2627f8df-e54f-45e7-862a-fcacad250f2a-cert podName:2627f8df-e54f-45e7-862a-fcacad250f2a nodeName:}" failed. No retries permitted until 2025-10-14 13:29:42.234471118 +0000 UTC m=+899.082905927 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2627f8df-e54f-45e7-862a-fcacad250f2a-cert") pod "openstack-operator-controller-manager-7cff5c958-wscp6" (UID: "2627f8df-e54f-45e7-862a-fcacad250f2a") : secret "webhook-server-cert" not found Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.764300 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjf8h\" (UniqueName: \"kubernetes.io/projected/3a364ec5-2c66-4d8d-8f51-b3b4357e7b67-kube-api-access-rjf8h\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-tsxsk\" (UID: \"3a364ec5-2c66-4d8d-8f51-b3b4357e7b67\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-tsxsk" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.771668 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6r5q\" (UniqueName: \"kubernetes.io/projected/50605e76-3c4b-480c-9a76-e84962f38851-kube-api-access-q6r5q\") pod \"watcher-operator-controller-manager-646675d848-l76vk\" (UID: \"50605e76-3c4b-480c-9a76-e84962f38851\") " pod="openstack-operators/watcher-operator-controller-manager-646675d848-l76vk" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.772305 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbzzk\" (UniqueName: \"kubernetes.io/projected/2627f8df-e54f-45e7-862a-fcacad250f2a-kube-api-access-gbzzk\") pod \"openstack-operator-controller-manager-7cff5c958-wscp6\" (UID: \"2627f8df-e54f-45e7-862a-fcacad250f2a\") " pod="openstack-operators/openstack-operator-controller-manager-7cff5c958-wscp6" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.782297 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-229z2\" 
(UniqueName: \"kubernetes.io/projected/cb31097e-d603-4f4d-8cc1-a5f1a841ea00-kube-api-access-229z2\") pod \"test-operator-controller-manager-ffcdd6c94-nhdz7\" (UID: \"cb31097e-d603-4f4d-8cc1-a5f1a841ea00\") " pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-nhdz7" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.814921 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-646675d848-l76vk" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.882370 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-tsxsk" Oct 14 13:29:41 crc kubenswrapper[4725]: I1014 13:29:41.938483 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1ee65fef-639a-4cff-8e7e-b68f7dd3a4ec-cert\") pod \"openstack-baremetal-operator-controller-manager-6cc7fb757d5s454\" (UID: \"1ee65fef-639a-4cff-8e7e-b68f7dd3a4ec\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757d5s454" Oct 14 13:29:41 crc kubenswrapper[4725]: E1014 13:29:41.938712 4725 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Oct 14 13:29:41 crc kubenswrapper[4725]: E1014 13:29:41.938806 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1ee65fef-639a-4cff-8e7e-b68f7dd3a4ec-cert podName:1ee65fef-639a-4cff-8e7e-b68f7dd3a4ec nodeName:}" failed. No retries permitted until 2025-10-14 13:29:42.93878265 +0000 UTC m=+899.787217529 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1ee65fef-639a-4cff-8e7e-b68f7dd3a4ec-cert") pod "openstack-baremetal-operator-controller-manager-6cc7fb757d5s454" (UID: "1ee65fef-639a-4cff-8e7e-b68f7dd3a4ec") : secret "openstack-baremetal-operator-webhook-server-cert" not found Oct 14 13:29:42 crc kubenswrapper[4725]: I1014 13:29:42.001057 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-nhdz7" Oct 14 13:29:42 crc kubenswrapper[4725]: I1014 13:29:42.090758 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-64f84fcdbb-wnx2r"] Oct 14 13:29:42 crc kubenswrapper[4725]: I1014 13:29:42.242351 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2627f8df-e54f-45e7-862a-fcacad250f2a-cert\") pod \"openstack-operator-controller-manager-7cff5c958-wscp6\" (UID: \"2627f8df-e54f-45e7-862a-fcacad250f2a\") " pod="openstack-operators/openstack-operator-controller-manager-7cff5c958-wscp6" Oct 14 13:29:42 crc kubenswrapper[4725]: I1014 13:29:42.249550 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2627f8df-e54f-45e7-862a-fcacad250f2a-cert\") pod \"openstack-operator-controller-manager-7cff5c958-wscp6\" (UID: \"2627f8df-e54f-45e7-862a-fcacad250f2a\") " pod="openstack-operators/openstack-operator-controller-manager-7cff5c958-wscp6" Oct 14 13:29:42 crc kubenswrapper[4725]: I1014 13:29:42.460737 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7cff5c958-wscp6" Oct 14 13:29:42 crc kubenswrapper[4725]: I1014 13:29:42.564186 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-59cdc64769-hbgnj"] Oct 14 13:29:42 crc kubenswrapper[4725]: I1014 13:29:42.571527 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-687df44cdb-h8bkv"] Oct 14 13:29:42 crc kubenswrapper[4725]: W1014 13:29:42.580834 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda9db202_78ec_4df7_9ead_374e287391a2.slice/crio-4013683d8bdbcff9121c0fee82526b95176bb253efbe608e8e7d77bba3545b21 WatchSource:0}: Error finding container 4013683d8bdbcff9121c0fee82526b95176bb253efbe608e8e7d77bba3545b21: Status 404 returned error can't find the container with id 4013683d8bdbcff9121c0fee82526b95176bb253efbe608e8e7d77bba3545b21 Oct 14 13:29:42 crc kubenswrapper[4725]: I1014 13:29:42.583683 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7bb46cd7d-pjkj4"] Oct 14 13:29:42 crc kubenswrapper[4725]: I1014 13:29:42.793923 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-wnx2r" event={"ID":"ea9bba90-8416-4b43-a2be-d41f635db481","Type":"ContainerStarted","Data":"b87de7333d4406c0d2e7bc48d69c2526db48edbbb4abb0e5d8bf81711779e462"} Oct 14 13:29:42 crc kubenswrapper[4725]: I1014 13:29:42.794965 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-pjkj4" event={"ID":"da9db202-78ec-4df7-9ead-374e287391a2","Type":"ContainerStarted","Data":"4013683d8bdbcff9121c0fee82526b95176bb253efbe608e8e7d77bba3545b21"} Oct 14 13:29:42 crc kubenswrapper[4725]: I1014 13:29:42.796270 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-687df44cdb-h8bkv" event={"ID":"40be9ede-1b5e-4022-9626-8367074f88c1","Type":"ContainerStarted","Data":"bae2ed50b65d131730863a3d2434466f278ca0aac1491791255725f791fb8717"} Oct 14 13:29:42 crc kubenswrapper[4725]: I1014 13:29:42.797737 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-hbgnj" event={"ID":"f8ae62f1-e80d-4f8c-81c3-0c5c50338046","Type":"ContainerStarted","Data":"1c65e7e19fb14128d65c4d02518eac07dc284e81bcf6706375aa32e7c2240d04"} Oct 14 13:29:42 crc kubenswrapper[4725]: I1014 13:29:42.828777 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-6d9967f8dd-w9x95"] Oct 14 13:29:42 crc kubenswrapper[4725]: I1014 13:29:42.842320 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-5777b4f897-9q25r"] Oct 14 13:29:42 crc kubenswrapper[4725]: W1014 13:29:42.848579 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8dc80d7a_d38e_4baa_85ae_fa856c39b48f.slice/crio-5f4f8c88cb81fd486659a6a2e448de97c731213d84cbad00ec256482bbc207a7 WatchSource:0}: Error finding container 5f4f8c88cb81fd486659a6a2e448de97c731213d84cbad00ec256482bbc207a7: Status 404 returned error can't find the container with id 
5f4f8c88cb81fd486659a6a2e448de97c731213d84cbad00ec256482bbc207a7 Oct 14 13:29:42 crc kubenswrapper[4725]: I1014 13:29:42.876707 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-74cb5cbc49-tt2xm"] Oct 14 13:29:42 crc kubenswrapper[4725]: I1014 13:29:42.884477 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-ddb98f99b-8jwrn"] Oct 14 13:29:42 crc kubenswrapper[4725]: I1014 13:29:42.919996 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d74794d9b-96vkx"] Oct 14 13:29:42 crc kubenswrapper[4725]: I1014 13:29:42.937293 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-585fc5b659-jgbh4"] Oct 14 13:29:42 crc kubenswrapper[4725]: W1014 13:29:42.948819 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod420f32f0_6ea7_4489_8165_215def02acf1.slice/crio-1199e1cfe7c2eea98be71af461663157a7fc08e78011745440dd0ee5e11537aa WatchSource:0}: Error finding container 1199e1cfe7c2eea98be71af461663157a7fc08e78011745440dd0ee5e11537aa: Status 404 returned error can't find the container with id 1199e1cfe7c2eea98be71af461663157a7fc08e78011745440dd0ee5e11537aa Oct 14 13:29:42 crc kubenswrapper[4725]: I1014 13:29:42.952058 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-664664cb68-b462h"] Oct 14 13:29:42 crc kubenswrapper[4725]: I1014 13:29:42.953352 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1ee65fef-639a-4cff-8e7e-b68f7dd3a4ec-cert\") pod \"openstack-baremetal-operator-controller-manager-6cc7fb757d5s454\" (UID: \"1ee65fef-639a-4cff-8e7e-b68f7dd3a4ec\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757d5s454" Oct 14 13:29:42 crc kubenswrapper[4725]: I1014 13:29:42.958222 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-797d478b46-55m5j"] Oct 14 13:29:42 crc kubenswrapper[4725]: I1014 13:29:42.958737 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1ee65fef-639a-4cff-8e7e-b68f7dd3a4ec-cert\") pod \"openstack-baremetal-operator-controller-manager-6cc7fb757d5s454\" (UID: \"1ee65fef-639a-4cff-8e7e-b68f7dd3a4ec\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757d5s454" Oct 14 13:29:43 crc kubenswrapper[4725]: I1014 13:29:43.003507 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757d5s454" Oct 14 13:29:43 crc kubenswrapper[4725]: I1014 13:29:43.124852 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-tsxsk"] Oct 14 13:29:43 crc kubenswrapper[4725]: I1014 13:29:43.135220 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-vmdhg"] Oct 14 13:29:43 crc kubenswrapper[4725]: I1014 13:29:43.145843 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-57bb74c7bf-mvqp4"] Oct 14 13:29:43 crc kubenswrapper[4725]: I1014 13:29:43.150243 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-578874c84d-878xx"] Oct 14 13:29:43 crc kubenswrapper[4725]: I1014 13:29:43.168603 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-869cc7797f-fnf6m"] Oct 14 13:29:43 crc kubenswrapper[4725]: I1014 13:29:43.184511 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-646675d848-l76vk"] Oct 14 13:29:43 crc kubenswrapper[4725]: W1014 13:29:43.192754 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod720eb902_cb6c_4d90_b25b_eae97e6d055d.slice/crio-973c9f3c28356823d3bf61b154ea518e6037d33cb5c2188c6ec087e3f2e64f2e WatchSource:0}: Error finding container 973c9f3c28356823d3bf61b154ea518e6037d33cb5c2188c6ec087e3f2e64f2e: Status 404 returned error can't find the container with id 973c9f3c28356823d3bf61b154ea518e6037d33cb5c2188c6ec087e3f2e64f2e Oct 14 13:29:43 crc kubenswrapper[4725]: I1014 13:29:43.202809 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-ffcdd6c94-nhdz7"] Oct 14 13:29:43 crc kubenswrapper[4725]: I1014 13:29:43.211171 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-d258r"] Oct 14 13:29:43 crc kubenswrapper[4725]: I1014 13:29:43.215138 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-59578bc799-4jwx9"] Oct 14 13:29:43 crc kubenswrapper[4725]: E1014 13:29:43.221106 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:4b4a17fe08ce00e375afaaec6a28835f5c1784f03d11c4558376ac04130f3a9e,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hgphz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f4d5dfdc6-vmdhg_openstack-operators(3d451d86-1531-4612-a2c8-5be10e09f890): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 14 13:29:43 crc kubenswrapper[4725]: E1014 13:29:43.224697 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:315e558023b41ac1aa215082096995a03810c5b42910a33b00427ffcac9c6a14,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c9d9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-869cc7797f-fnf6m_openstack-operators(28bb98a9-5146-48a5-8f4f-a3a7766ab18c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 14 13:29:43 crc kubenswrapper[4725]: E1014 13:29:43.229872 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:98a5233f0596591acdf2c6a5838b08be108787cdb6ad1995b2b7886bac0fe6ca,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q6r5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-646675d848-l76vk_openstack-operators(50605e76-3c4b-480c-9a76-e84962f38851): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 14 13:29:43 crc kubenswrapper[4725]: I1014 13:29:43.232544 4725 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7cff5c958-wscp6"] Oct 14 13:29:43 crc kubenswrapper[4725]: E1014 13:29:43.234559 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:582f7b1e411961b69f2e3c6b346aa25759b89f7720ed3fade1d363bf5d2dffc8,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s8qfv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-59578bc799-4jwx9_openstack-operators(0ba97bf9-2b5b-495b-99e2-328a987535e1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 14 13:29:43 crc kubenswrapper[4725]: W1014 13:29:43.239666 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb31097e_d603_4f4d_8cc1_a5f1a841ea00.slice/crio-3ac92bbf352678cd9eb5570e584d9488f3d951c6b472e9f8db0b360d02e1f369 WatchSource:0}: Error finding container 3ac92bbf352678cd9eb5570e584d9488f3d951c6b472e9f8db0b360d02e1f369: Status 404 returned error can't find the container with id 3ac92bbf352678cd9eb5570e584d9488f3d951c6b472e9f8db0b360d02e1f369 Oct 14 13:29:43 crc kubenswrapper[4725]: W1014 13:29:43.243853 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb53b295_5121_4b18_9d3d_e3e981a64f2d.slice/crio-4711659f3bf6de794d9fe3a3c4a55149587a907e28ee398d5256c7679e33327a WatchSource:0}: Error finding container 
4711659f3bf6de794d9fe3a3c4a55149587a907e28ee398d5256c7679e33327a: Status 404 returned error can't find the container with id 4711659f3bf6de794d9fe3a3c4a55149587a907e28ee398d5256c7679e33327a Oct 14 13:29:43 crc kubenswrapper[4725]: W1014 13:29:43.244171 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4436f5f_2c38_49ee_8e53_061e294f009f.slice/crio-f16c7224ae20d41b18febb4cdbc8275d036723f61e1bafb0ecd9357762b35fcb WatchSource:0}: Error finding container f16c7224ae20d41b18febb4cdbc8275d036723f61e1bafb0ecd9357762b35fcb: Status 404 returned error can't find the container with id f16c7224ae20d41b18febb4cdbc8275d036723f61e1bafb0ecd9357762b35fcb Oct 14 13:29:43 crc kubenswrapper[4725]: W1014 13:29:43.246127 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2627f8df_e54f_45e7_862a_fcacad250f2a.slice/crio-1aaa6d623cf55bf097b219eaf0fd2d9cfe36d79ac6e73650b477bce490ea5fdd WatchSource:0}: Error finding container 1aaa6d623cf55bf097b219eaf0fd2d9cfe36d79ac6e73650b477bce490ea5fdd: Status 404 returned error can't find the container with id 1aaa6d623cf55bf097b219eaf0fd2d9cfe36d79ac6e73650b477bce490ea5fdd Oct 14 13:29:43 crc kubenswrapper[4725]: E1014 13:29:43.246316 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:7e584b1c430441c8b6591dadeff32e065de8a185ad37ef90d2e08d37e59aab4a,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-229z2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-ffcdd6c94-nhdz7_openstack-operators(cb31097e-d603-4f4d-8cc1-a5f1a841ea00): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 14 13:29:43 crc kubenswrapper[4725]: E1014 13:29:43.249815 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:abe978f8da75223de5043cca50278ad4e28c8dd309883f502fe1e7a9998733b0,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nr7w4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-578874c84d-878xx_openstack-operators(fb53b295-5121-4b18-9d3d-e3e981a64f2d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 14 13:29:43 crc kubenswrapper[4725]: E1014 13:29:43.251226 4725 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:09deecf840d38ff6af3c924729cf0a9444bc985848bfbe7c918019b88a6bc4d7,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nxhkb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-6d7c7ddf95-d258r_openstack-operators(e4436f5f-2c38-49ee-8e53-061e294f009f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 14 13:29:43 crc kubenswrapper[4725]: E1014 13:29:43.426700 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-fnf6m" podUID="28bb98a9-5146-48a5-8f4f-a3a7766ab18c" Oct 14 13:29:43 crc kubenswrapper[4725]: E1014 13:29:43.442068 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-646675d848-l76vk" podUID="50605e76-3c4b-480c-9a76-e84962f38851" Oct 14 13:29:43 crc kubenswrapper[4725]: E1014 13:29:43.460008 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/manila-operator-controller-manager-59578bc799-4jwx9" podUID="0ba97bf9-2b5b-495b-99e2-328a987535e1" Oct 14 13:29:43 crc kubenswrapper[4725]: I1014 13:29:43.514923 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757d5s454"] Oct 14 13:29:43 crc kubenswrapper[4725]: E1014 13:29:43.545227 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-vmdhg" podUID="3d451d86-1531-4612-a2c8-5be10e09f890" Oct 14 13:29:43 crc kubenswrapper[4725]: W1014 13:29:43.555342 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ee65fef_639a_4cff_8e7e_b68f7dd3a4ec.slice/crio-8c9dd3f1fafe487b784f55fa1dceb432722f8a227c5d0192b6380d558d5e6f25 WatchSource:0}: Error finding container 8c9dd3f1fafe487b784f55fa1dceb432722f8a227c5d0192b6380d558d5e6f25: Status 404 returned error can't find the container with id 8c9dd3f1fafe487b784f55fa1dceb432722f8a227c5d0192b6380d558d5e6f25 Oct 14 13:29:43 crc kubenswrapper[4725]: E1014 13:29:43.586244 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-878xx" podUID="fb53b295-5121-4b18-9d3d-e3e981a64f2d" Oct 14 13:29:43 crc kubenswrapper[4725]: E1014 13:29:43.587546 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-d258r" podUID="e4436f5f-2c38-49ee-8e53-061e294f009f" Oct 14 13:29:43 crc kubenswrapper[4725]: E1014 13:29:43.605839 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-nhdz7" podUID="cb31097e-d603-4f4d-8cc1-a5f1a841ea00" Oct 14 13:29:43 crc kubenswrapper[4725]: I1014 13:29:43.854298 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-mvqp4" event={"ID":"720eb902-cb6c-4d90-b25b-eae97e6d055d","Type":"ContainerStarted","Data":"973c9f3c28356823d3bf61b154ea518e6037d33cb5c2188c6ec087e3f2e64f2e"} Oct 14 13:29:43 crc kubenswrapper[4725]: I1014 13:29:43.856908 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-646675d848-l76vk" event={"ID":"50605e76-3c4b-480c-9a76-e84962f38851","Type":"ContainerStarted","Data":"c99a7225eff05ad4268d4b0ef98754fcdb8cd546f4679a115cd36a4032f0d955"} Oct 14 13:29:43 crc kubenswrapper[4725]: I1014 13:29:43.856932 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-646675d848-l76vk" event={"ID":"50605e76-3c4b-480c-9a76-e84962f38851","Type":"ContainerStarted","Data":"e5d4b09910c98bed2b0692ce43fb7adaae46d1af65dc946f18cbbbbdf947ab49"} Oct 14 13:29:43 crc kubenswrapper[4725]: E1014 13:29:43.858488 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:98a5233f0596591acdf2c6a5838b08be108787cdb6ad1995b2b7886bac0fe6ca\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-646675d848-l76vk" podUID="50605e76-3c4b-480c-9a76-e84962f38851" Oct 14 13:29:43 crc kubenswrapper[4725]: I1014 13:29:43.864063 
4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-w9x95" event={"ID":"8dc80d7a-d38e-4baa-85ae-fa856c39b48f","Type":"ContainerStarted","Data":"5f4f8c88cb81fd486659a6a2e448de97c731213d84cbad00ec256482bbc207a7"} Oct 14 13:29:43 crc kubenswrapper[4725]: I1014 13:29:43.869588 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-tt2xm" event={"ID":"64f86a1d-32a0-4133-94c3-b59ab14a0d4e","Type":"ContainerStarted","Data":"54ca543133e85cb7e1943e6c9041b5df4f37d49c5577dc3692af5cca8102a61b"} Oct 14 13:29:43 crc kubenswrapper[4725]: I1014 13:29:43.871711 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757d5s454" event={"ID":"1ee65fef-639a-4cff-8e7e-b68f7dd3a4ec","Type":"ContainerStarted","Data":"8c9dd3f1fafe487b784f55fa1dceb432722f8a227c5d0192b6380d558d5e6f25"} Oct 14 13:29:43 crc kubenswrapper[4725]: I1014 13:29:43.876130 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-59578bc799-4jwx9" event={"ID":"0ba97bf9-2b5b-495b-99e2-328a987535e1","Type":"ContainerStarted","Data":"b1c73558aa06f359c431d2ca6803a83755d2ecea8fcc4e2f530627236cfeb1f5"} Oct 14 13:29:43 crc kubenswrapper[4725]: I1014 13:29:43.876167 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-59578bc799-4jwx9" event={"ID":"0ba97bf9-2b5b-495b-99e2-328a987535e1","Type":"ContainerStarted","Data":"a1059481d5e022dabc7e88ef7e55665bac656e4bdb62488a4a83960e5eff6e75"} Oct 14 13:29:43 crc kubenswrapper[4725]: E1014 13:29:43.878700 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:582f7b1e411961b69f2e3c6b346aa25759b89f7720ed3fade1d363bf5d2dffc8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-59578bc799-4jwx9" podUID="0ba97bf9-2b5b-495b-99e2-328a987535e1" Oct 14 13:29:43 crc kubenswrapper[4725]: I1014 13:29:43.881242 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-797d478b46-55m5j" event={"ID":"15e9c123-7b58-49ab-b7d7-f429e6b15c1e","Type":"ContainerStarted","Data":"cda67a050a5d50d6ce1087644819d20a1b5e5ee17d25ab4796677bf3f53e8172"} Oct 14 13:29:43 crc kubenswrapper[4725]: I1014 13:29:43.894855 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-fnf6m" event={"ID":"28bb98a9-5146-48a5-8f4f-a3a7766ab18c","Type":"ContainerStarted","Data":"b9563ac38b46e527fa285d52c8cbdf2dfd2bd9c2f078670fc84e5f0de7e3d68e"} Oct 14 13:29:43 crc kubenswrapper[4725]: I1014 13:29:43.894903 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-fnf6m" event={"ID":"28bb98a9-5146-48a5-8f4f-a3a7766ab18c","Type":"ContainerStarted","Data":"fd84e7c526ea696e0f231d942b9941752596c34adc09339c269f1fd80c7d0271"} Oct 14 13:29:43 crc kubenswrapper[4725]: E1014 13:29:43.906834 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:315e558023b41ac1aa215082096995a03810c5b42910a33b00427ffcac9c6a14\\\"\"" 
pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-fnf6m" podUID="28bb98a9-5146-48a5-8f4f-a3a7766ab18c" Oct 14 13:29:44 crc kubenswrapper[4725]: I1014 13:29:44.005130 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-664664cb68-b462h" event={"ID":"420f32f0-6ea7-4489-8165-215def02acf1","Type":"ContainerStarted","Data":"1199e1cfe7c2eea98be71af461663157a7fc08e78011745440dd0ee5e11537aa"} Oct 14 13:29:44 crc kubenswrapper[4725]: I1014 13:29:44.005200 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-d258r" event={"ID":"e4436f5f-2c38-49ee-8e53-061e294f009f","Type":"ContainerStarted","Data":"d76ddae2f2fea0a4d25376e82965788509677a3ff4fb310ce04bd3ce25fb7149"} Oct 14 13:29:44 crc kubenswrapper[4725]: I1014 13:29:44.005225 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-d258r" event={"ID":"e4436f5f-2c38-49ee-8e53-061e294f009f","Type":"ContainerStarted","Data":"f16c7224ae20d41b18febb4cdbc8275d036723f61e1bafb0ecd9357762b35fcb"} Oct 14 13:29:44 crc kubenswrapper[4725]: I1014 13:29:44.005238 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-vmdhg" event={"ID":"3d451d86-1531-4612-a2c8-5be10e09f890","Type":"ContainerStarted","Data":"b8be03329e1a7cbc4ef9cf302d70f0ba5026e3ec476de4f4f12697d48cc1137a"} Oct 14 13:29:44 crc kubenswrapper[4725]: I1014 13:29:44.005252 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-vmdhg" event={"ID":"3d451d86-1531-4612-a2c8-5be10e09f890","Type":"ContainerStarted","Data":"65af664388b6a9255f477b061623c9b51393dcd9ba9c285f1ad9ea4c7cc45ac6"} Oct 14 13:29:44 crc kubenswrapper[4725]: E1014 13:29:44.007706 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:4b4a17fe08ce00e375afaaec6a28835f5c1784f03d11c4558376ac04130f3a9e\\\"\"" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-vmdhg" podUID="3d451d86-1531-4612-a2c8-5be10e09f890" Oct 14 13:29:44 crc kubenswrapper[4725]: E1014 13:29:44.008047 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:09deecf840d38ff6af3c924729cf0a9444bc985848bfbe7c918019b88a6bc4d7\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-d258r" podUID="e4436f5f-2c38-49ee-8e53-061e294f009f" Oct 14 13:29:44 crc kubenswrapper[4725]: I1014 13:29:44.034321 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-nhdz7" event={"ID":"cb31097e-d603-4f4d-8cc1-a5f1a841ea00","Type":"ContainerStarted","Data":"74c43837f401a173eff79c68aa1b1e9a4f6b26e68c25c2a6bd1ad97777017ec5"} Oct 14 13:29:44 crc kubenswrapper[4725]: I1014 13:29:44.034380 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-nhdz7" event={"ID":"cb31097e-d603-4f4d-8cc1-a5f1a841ea00","Type":"ContainerStarted","Data":"3ac92bbf352678cd9eb5570e584d9488f3d951c6b472e9f8db0b360d02e1f369"} Oct 14 13:29:44 crc kubenswrapper[4725]: E1014 
13:29:44.042182 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:7e584b1c430441c8b6591dadeff32e065de8a185ad37ef90d2e08d37e59aab4a\\\"\"" pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-nhdz7" podUID="cb31097e-d603-4f4d-8cc1-a5f1a841ea00" Oct 14 13:29:44 crc kubenswrapper[4725]: I1014 13:29:44.044130 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-96vkx" event={"ID":"f0b42b9f-7713-48ca-b148-c42b5d2006f3","Type":"ContainerStarted","Data":"5e38dcf997659d6f194098fef7d0b808110fe4e6f884d812f3f298efc4743923"} Oct 14 13:29:44 crc kubenswrapper[4725]: I1014 13:29:44.053751 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7cff5c958-wscp6" event={"ID":"2627f8df-e54f-45e7-862a-fcacad250f2a","Type":"ContainerStarted","Data":"2457b4bd27bbc30865677c2f68fceb55e9ff44700cb09eb048c2e4c483424616"} Oct 14 13:29:44 crc kubenswrapper[4725]: I1014 13:29:44.053933 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7cff5c958-wscp6" event={"ID":"2627f8df-e54f-45e7-862a-fcacad250f2a","Type":"ContainerStarted","Data":"5c87a21e3a1f90e23150d89b0caaea99d966dec5579e65b923c71a7b13c95fc0"} Oct 14 13:29:44 crc kubenswrapper[4725]: I1014 13:29:44.053996 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7cff5c958-wscp6" event={"ID":"2627f8df-e54f-45e7-862a-fcacad250f2a","Type":"ContainerStarted","Data":"1aaa6d623cf55bf097b219eaf0fd2d9cfe36d79ac6e73650b477bce490ea5fdd"} Oct 14 13:29:44 crc kubenswrapper[4725]: I1014 13:29:44.055632 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7cff5c958-wscp6" Oct 14 13:29:44 crc kubenswrapper[4725]: I1014 13:29:44.057897 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-jgbh4" event={"ID":"3316f5a0-820b-45ec-802a-dc3203f1d9fa","Type":"ContainerStarted","Data":"1fba7107bee932e1833b931ef567d2d56d8f68a2cfbe78587ea04965945f887b"} Oct 14 13:29:44 crc kubenswrapper[4725]: I1014 13:29:44.059716 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-8jwrn" event={"ID":"95db1b3c-9877-4e8f-b756-532ccfd5db7a","Type":"ContainerStarted","Data":"a130ff8cce415dfc1a3a36a342d7dab4c9edcd245fc5ba23fdbcd2bedb76f4cf"} Oct 14 13:29:44 crc kubenswrapper[4725]: I1014 13:29:44.073820 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-tsxsk" event={"ID":"3a364ec5-2c66-4d8d-8f51-b3b4357e7b67","Type":"ContainerStarted","Data":"03a93674910c864e8b8e59c6d7e6b0ba0782a1514f3e80d54bc0811a9331a8f5"} Oct 14 13:29:44 crc kubenswrapper[4725]: I1014 13:29:44.077616 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-878xx" event={"ID":"fb53b295-5121-4b18-9d3d-e3e981a64f2d","Type":"ContainerStarted","Data":"53b7f83833f9e23b33710c193f5bc2c373343e45e2697e8bcaba307d34de200e"} Oct 14 13:29:44 crc kubenswrapper[4725]: I1014 13:29:44.077675 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-878xx" event={"ID":"fb53b295-5121-4b18-9d3d-e3e981a64f2d","Type":"ContainerStarted","Data":"4711659f3bf6de794d9fe3a3c4a55149587a907e28ee398d5256c7679e33327a"} Oct 14 13:29:44 crc kubenswrapper[4725]: E1014 13:29:44.080255 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:abe978f8da75223de5043cca50278ad4e28c8dd309883f502fe1e7a9998733b0\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-878xx" podUID="fb53b295-5121-4b18-9d3d-e3e981a64f2d" Oct 14 13:29:44 crc kubenswrapper[4725]: I1014 13:29:44.084691 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-9q25r" event={"ID":"960fbcd9-1563-4f84-89ac-53694a2413de","Type":"ContainerStarted","Data":"4263242191dbf7f833a37231a8a819425aa11d88150f536e3020053cb7950de0"} Oct 14 13:29:44 crc kubenswrapper[4725]: I1014 13:29:44.662376 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7cff5c958-wscp6" podStartSLOduration=3.662354024 podStartE2EDuration="3.662354024s" podCreationTimestamp="2025-10-14 13:29:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:29:44.649010696 +0000 UTC m=+901.497445505" watchObservedRunningTime="2025-10-14 13:29:44.662354024 +0000 UTC m=+901.510788843" Oct 14 13:29:45 crc kubenswrapper[4725]: E1014 13:29:45.095876 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:582f7b1e411961b69f2e3c6b346aa25759b89f7720ed3fade1d363bf5d2dffc8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-59578bc799-4jwx9" podUID="0ba97bf9-2b5b-495b-99e2-328a987535e1" Oct 14 13:29:45 crc kubenswrapper[4725]: E1014 13:29:45.099378 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:7e584b1c430441c8b6591dadeff32e065de8a185ad37ef90d2e08d37e59aab4a\\\"\"" pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-nhdz7" podUID="cb31097e-d603-4f4d-8cc1-a5f1a841ea00" Oct 14 13:29:45 crc kubenswrapper[4725]: E1014 13:29:45.100991 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:315e558023b41ac1aa215082096995a03810c5b42910a33b00427ffcac9c6a14\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-fnf6m" podUID="28bb98a9-5146-48a5-8f4f-a3a7766ab18c" Oct 14 13:29:45 crc kubenswrapper[4725]: E1014 13:29:45.101075 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:98a5233f0596591acdf2c6a5838b08be108787cdb6ad1995b2b7886bac0fe6ca\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-646675d848-l76vk" podUID="50605e76-3c4b-480c-9a76-e84962f38851" Oct 14 
13:29:45 crc kubenswrapper[4725]: E1014 13:29:45.101260 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:4b4a17fe08ce00e375afaaec6a28835f5c1784f03d11c4558376ac04130f3a9e\\\"\"" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-vmdhg" podUID="3d451d86-1531-4612-a2c8-5be10e09f890" Oct 14 13:29:45 crc kubenswrapper[4725]: E1014 13:29:45.101503 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:09deecf840d38ff6af3c924729cf0a9444bc985848bfbe7c918019b88a6bc4d7\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-d258r" podUID="e4436f5f-2c38-49ee-8e53-061e294f009f" Oct 14 13:29:45 crc kubenswrapper[4725]: E1014 13:29:45.118256 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:abe978f8da75223de5043cca50278ad4e28c8dd309883f502fe1e7a9998733b0\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-878xx" podUID="fb53b295-5121-4b18-9d3d-e3e981a64f2d" Oct 14 13:29:52 crc kubenswrapper[4725]: I1014 13:29:52.472335 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7cff5c958-wscp6" Oct 14 13:29:55 crc kubenswrapper[4725]: E1014 13:29:55.150345 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:ee05f2b06405240a8fcdbd430a9e8983b4667f372548334307b68c154e389960" Oct 14 13:29:55 crc kubenswrapper[4725]: E1014 13:29:55.150946 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:ee05f2b06405240a8fcdbd430a9e8983b4667f372548334307b68c154e389960,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wjzrd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-74cb5cbc49-tt2xm_openstack-operators(64f86a1d-32a0-4133-94c3-b59ab14a0d4e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 13:29:55 crc kubenswrapper[4725]: E1014 13:29:55.584976 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:063a7e65b4ba98f0506f269ff7525b446eae06a5ed4a61c18ffa33a886500867" Oct 14 13:29:55 crc kubenswrapper[4725]: E1014 13:29:55.585505 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:063a7e65b4ba98f0506f269ff7525b446eae06a5ed4a61c18ffa33a886500867,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-74mcg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-6d74794d9b-96vkx_openstack-operators(f0b42b9f-7713-48ca-b148-c42b5d2006f3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 13:29:56 crc kubenswrapper[4725]: E1014 13:29:56.070335 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:a17fc270857869fd1efe5020b2a1cb8c2abbd838f08de88f3a6a59e8754ec351" Oct 14 13:29:56 crc kubenswrapper[4725]: E1014 13:29:56.070922 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:a17fc270857869fd1efe5020b2a1cb8c2abbd838f08de88f3a6a59e8754ec351,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEIL
OMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Nam
e:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter:v1.5.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_
URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_AP
I_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine:current-podified,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n68rm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-6cc7fb757d5s454_openstack-operators(1ee65fef-639a-4cff-8e7e-b68f7dd3a4ec): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" 
logger="UnhandledError" Oct 14 13:29:56 crc kubenswrapper[4725]: E1014 13:29:56.516257 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/infra-operator@sha256:5cfb2ae1092445950b39dd59caa9a8c9367f42fb8353a8c3848d3bc729f24492" Oct 14 13:29:56 crc kubenswrapper[4725]: E1014 13:29:56.516423 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:5cfb2ae1092445950b39dd59caa9a8c9367f42fb8353a8c3848d3bc729f24492,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-58lrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-585fc5b659-jgbh4_openstack-operators(3316f5a0-820b-45ec-802a-dc3203f1d9fa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 13:29:57 crc kubenswrapper[4725]: E1014 13:29:57.036739 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:79b43a69884631c635d2164b95a2d4ec68f5cb33f96da14764f1c710880f3997" Oct 14 13:29:57 crc kubenswrapper[4725]: E1014 13:29:57.036923 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:79b43a69884631c635d2164b95a2d4ec68f5cb33f96da14764f1c710880f3997,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-28qz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-ddb98f99b-8jwrn_openstack-operators(95db1b3c-9877-4e8f-b756-532ccfd5db7a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 13:29:57 crc kubenswrapper[4725]: E1014 13:29:57.482230 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-jgbh4" podUID="3316f5a0-820b-45ec-802a-dc3203f1d9fa" Oct 14 13:29:57 crc kubenswrapper[4725]: E1014 13:29:57.542326 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-8jwrn" podUID="95db1b3c-9877-4e8f-b756-532ccfd5db7a" Oct 14 13:29:57 crc kubenswrapper[4725]: E1014 13:29:57.563441 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757d5s454" 
podUID="1ee65fef-639a-4cff-8e7e-b68f7dd3a4ec" Oct 14 13:29:57 crc kubenswrapper[4725]: E1014 13:29:57.692011 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-96vkx" podUID="f0b42b9f-7713-48ca-b148-c42b5d2006f3" Oct 14 13:29:57 crc kubenswrapper[4725]: E1014 13:29:57.693399 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-tt2xm" podUID="64f86a1d-32a0-4133-94c3-b59ab14a0d4e" Oct 14 13:29:58 crc kubenswrapper[4725]: I1014 13:29:58.179344 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-797d478b46-55m5j" event={"ID":"15e9c123-7b58-49ab-b7d7-f429e6b15c1e","Type":"ContainerStarted","Data":"c540356557fc1640c318180b4a03ae9306e7cabb71b0726612603d3c469e37d8"} Oct 14 13:29:58 crc kubenswrapper[4725]: I1014 13:29:58.181101 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-96vkx" event={"ID":"f0b42b9f-7713-48ca-b148-c42b5d2006f3","Type":"ContainerStarted","Data":"ccd9edb00198526e9a48e19e70a69bd954e9ec8eb1f2b41f67ce4fe32ef547df"} Oct 14 13:29:58 crc kubenswrapper[4725]: E1014 13:29:58.182524 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:063a7e65b4ba98f0506f269ff7525b446eae06a5ed4a61c18ffa33a886500867\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-96vkx" podUID="f0b42b9f-7713-48ca-b148-c42b5d2006f3" Oct 14 13:29:58 crc kubenswrapper[4725]: I1014 13:29:58.183765 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-tt2xm" event={"ID":"64f86a1d-32a0-4133-94c3-b59ab14a0d4e","Type":"ContainerStarted","Data":"9f1d9c3da27bfd148b22a0b043b9bbb8e9f130fa3512e4b58196fe907048c872"} Oct 14 13:29:58 crc kubenswrapper[4725]: E1014 13:29:58.186033 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:ee05f2b06405240a8fcdbd430a9e8983b4667f372548334307b68c154e389960\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-tt2xm" podUID="64f86a1d-32a0-4133-94c3-b59ab14a0d4e" Oct 14 13:29:58 crc kubenswrapper[4725]: I1014 13:29:58.199173 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-wnx2r" event={"ID":"ea9bba90-8416-4b43-a2be-d41f635db481","Type":"ContainerStarted","Data":"6433807d537e80560e625f85ffe1be98ab3e746b40fbe1a5c1c92cd2110107e0"} Oct 14 13:29:58 crc kubenswrapper[4725]: I1014 13:29:58.200584 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-9q25r" event={"ID":"960fbcd9-1563-4f84-89ac-53694a2413de","Type":"ContainerStarted","Data":"e1e018912ed94233098ffa536981b80da70815fc3e02f447a23270f7bf9c9480"} Oct 14 13:29:58 crc kubenswrapper[4725]: I1014 
13:29:58.201659 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-664664cb68-b462h" event={"ID":"420f32f0-6ea7-4489-8165-215def02acf1","Type":"ContainerStarted","Data":"c33d19499f60e7cf3081c7d6b9f3b20a54dc3ee029cc29cf0a857c75f4a9ce2c"} Oct 14 13:29:58 crc kubenswrapper[4725]: I1014 13:29:58.213603 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-8jwrn" event={"ID":"95db1b3c-9877-4e8f-b756-532ccfd5db7a","Type":"ContainerStarted","Data":"05be7acd1b69a5fc3b9361ee3c6886c296739c9620432e89a24aa91ba4225260"} Oct 14 13:29:58 crc kubenswrapper[4725]: E1014 13:29:58.214608 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:79b43a69884631c635d2164b95a2d4ec68f5cb33f96da14764f1c710880f3997\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-8jwrn" podUID="95db1b3c-9877-4e8f-b756-532ccfd5db7a" Oct 14 13:29:58 crc kubenswrapper[4725]: I1014 13:29:58.217299 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-tsxsk" event={"ID":"3a364ec5-2c66-4d8d-8f51-b3b4357e7b67","Type":"ContainerStarted","Data":"1d349030846e8722ab73fc9fe5585f9bcb4c9949524717556d054b6d4e3675c2"} Oct 14 13:29:58 crc kubenswrapper[4725]: I1014 13:29:58.218776 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-687df44cdb-h8bkv" event={"ID":"40be9ede-1b5e-4022-9626-8367074f88c1","Type":"ContainerStarted","Data":"f6092db30e49e98f70c59c52f14ad08371ee7e65a4457c2087e8e8a5ffae73b5"} Oct 14 13:29:58 crc kubenswrapper[4725]: I1014 13:29:58.235705 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-pjkj4" event={"ID":"da9db202-78ec-4df7-9ead-374e287391a2","Type":"ContainerStarted","Data":"a1ad4debdb2d517677f009936d0d4661d4aefcd09dbe38aeb6b38f7c8f0a8009"} Oct 14 13:29:58 crc kubenswrapper[4725]: I1014 13:29:58.237707 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-jgbh4" event={"ID":"3316f5a0-820b-45ec-802a-dc3203f1d9fa","Type":"ContainerStarted","Data":"74be74da4aa1a4f5d7ba05ccecdaaeaac11af581a6317c3ab1fabbecd03c6ef0"} Oct 14 13:29:58 crc kubenswrapper[4725]: E1014 13:29:58.238947 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:5cfb2ae1092445950b39dd59caa9a8c9367f42fb8353a8c3848d3bc729f24492\\\"\"" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-jgbh4" podUID="3316f5a0-820b-45ec-802a-dc3203f1d9fa" Oct 14 13:29:58 crc kubenswrapper[4725]: I1014 13:29:58.243905 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-hbgnj" event={"ID":"f8ae62f1-e80d-4f8c-81c3-0c5c50338046","Type":"ContainerStarted","Data":"1d73cfcabf45ad8cfdccd43823fd0bddeaff7d4ab29adbd23e8c7b5185f24e55"} Oct 14 13:29:58 crc kubenswrapper[4725]: I1014 13:29:58.245534 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-w9x95" 
event={"ID":"8dc80d7a-d38e-4baa-85ae-fa856c39b48f","Type":"ContainerStarted","Data":"b0b3eb02579fe5c06fdc81982ecfd61966869501d00c733e552a2148f8fe7b09"} Oct 14 13:29:58 crc kubenswrapper[4725]: I1014 13:29:58.247193 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-mvqp4" event={"ID":"720eb902-cb6c-4d90-b25b-eae97e6d055d","Type":"ContainerStarted","Data":"bbd92973e04dded16d609439821a07092f2d8aa0f2a81d510d9edf8682cc78b9"} Oct 14 13:29:58 crc kubenswrapper[4725]: I1014 13:29:58.249561 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757d5s454" event={"ID":"1ee65fef-639a-4cff-8e7e-b68f7dd3a4ec","Type":"ContainerStarted","Data":"4f72d45fd4c265c2392061beb8a21ce4fb82d9ffdaa70ce79c38d75292f2c620"} Oct 14 13:29:58 crc kubenswrapper[4725]: E1014 13:29:58.256598 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:a17fc270857869fd1efe5020b2a1cb8c2abbd838f08de88f3a6a59e8754ec351\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757d5s454" podUID="1ee65fef-639a-4cff-8e7e-b68f7dd3a4ec" Oct 14 13:29:58 crc kubenswrapper[4725]: I1014 13:29:58.327119 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-tsxsk" podStartSLOduration=3.402213706 podStartE2EDuration="17.327097846s" podCreationTimestamp="2025-10-14 13:29:41 +0000 UTC" firstStartedPulling="2025-10-14 13:29:43.16130912 +0000 UTC m=+900.009743929" lastFinishedPulling="2025-10-14 13:29:57.08619327 +0000 UTC m=+913.934628069" observedRunningTime="2025-10-14 13:29:58.32470816 +0000 UTC m=+915.173142989" watchObservedRunningTime="2025-10-14 13:29:58.327097846 +0000 UTC m=+915.175532665" Oct 14 13:29:59 crc kubenswrapper[4725]: E1014 13:29:59.273506 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:79b43a69884631c635d2164b95a2d4ec68f5cb33f96da14764f1c710880f3997\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-8jwrn" podUID="95db1b3c-9877-4e8f-b756-532ccfd5db7a" Oct 14 13:29:59 crc kubenswrapper[4725]: E1014 13:29:59.273681 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:ee05f2b06405240a8fcdbd430a9e8983b4667f372548334307b68c154e389960\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-tt2xm" podUID="64f86a1d-32a0-4133-94c3-b59ab14a0d4e" Oct 14 13:29:59 crc kubenswrapper[4725]: E1014 13:29:59.273757 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:063a7e65b4ba98f0506f269ff7525b446eae06a5ed4a61c18ffa33a886500867\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-96vkx" podUID="f0b42b9f-7713-48ca-b148-c42b5d2006f3" Oct 14 13:29:59 crc kubenswrapper[4725]: E1014 13:29:59.274138 4725 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:5cfb2ae1092445950b39dd59caa9a8c9367f42fb8353a8c3848d3bc729f24492\\\"\"" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-jgbh4" podUID="3316f5a0-820b-45ec-802a-dc3203f1d9fa" Oct 14 13:29:59 crc kubenswrapper[4725]: E1014 13:29:59.280118 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:a17fc270857869fd1efe5020b2a1cb8c2abbd838f08de88f3a6a59e8754ec351\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757d5s454" podUID="1ee65fef-639a-4cff-8e7e-b68f7dd3a4ec" Oct 14 13:30:00 crc kubenswrapper[4725]: I1014 13:30:00.133050 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340810-gkvzs"] Oct 14 13:30:00 crc kubenswrapper[4725]: I1014 13:30:00.137897 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340810-gkvzs" Oct 14 13:30:00 crc kubenswrapper[4725]: I1014 13:30:00.142170 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 14 13:30:00 crc kubenswrapper[4725]: I1014 13:30:00.142332 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 14 13:30:00 crc kubenswrapper[4725]: I1014 13:30:00.147639 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340810-gkvzs"] Oct 14 13:30:00 crc kubenswrapper[4725]: I1014 13:30:00.259131 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bhdf\" (UniqueName: \"kubernetes.io/projected/a18d61b0-f276-4eac-b1c6-bbbc679d5059-kube-api-access-9bhdf\") pod \"collect-profiles-29340810-gkvzs\" (UID: \"a18d61b0-f276-4eac-b1c6-bbbc679d5059\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340810-gkvzs" Oct 14 13:30:00 crc kubenswrapper[4725]: I1014 13:30:00.259196 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a18d61b0-f276-4eac-b1c6-bbbc679d5059-config-volume\") pod \"collect-profiles-29340810-gkvzs\" (UID: \"a18d61b0-f276-4eac-b1c6-bbbc679d5059\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340810-gkvzs" Oct 14 13:30:00 crc kubenswrapper[4725]: I1014 13:30:00.259221 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a18d61b0-f276-4eac-b1c6-bbbc679d5059-secret-volume\") pod \"collect-profiles-29340810-gkvzs\" (UID: \"a18d61b0-f276-4eac-b1c6-bbbc679d5059\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340810-gkvzs" Oct 14 13:30:00 crc kubenswrapper[4725]: I1014 13:30:00.361254 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bhdf\" (UniqueName: \"kubernetes.io/projected/a18d61b0-f276-4eac-b1c6-bbbc679d5059-kube-api-access-9bhdf\") pod \"collect-profiles-29340810-gkvzs\" (UID: \"a18d61b0-f276-4eac-b1c6-bbbc679d5059\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29340810-gkvzs" Oct 14 13:30:00 crc kubenswrapper[4725]: I1014 13:30:00.361306 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a18d61b0-f276-4eac-b1c6-bbbc679d5059-config-volume\") pod \"collect-profiles-29340810-gkvzs\" (UID: \"a18d61b0-f276-4eac-b1c6-bbbc679d5059\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340810-gkvzs" Oct 14 13:30:00 crc kubenswrapper[4725]: I1014 13:30:00.361334 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a18d61b0-f276-4eac-b1c6-bbbc679d5059-secret-volume\") pod \"collect-profiles-29340810-gkvzs\" (UID: \"a18d61b0-f276-4eac-b1c6-bbbc679d5059\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340810-gkvzs" Oct 14 13:30:00 crc kubenswrapper[4725]: I1014 13:30:00.362762 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a18d61b0-f276-4eac-b1c6-bbbc679d5059-config-volume\") pod \"collect-profiles-29340810-gkvzs\" (UID: \"a18d61b0-f276-4eac-b1c6-bbbc679d5059\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340810-gkvzs" Oct 14 13:30:00 crc kubenswrapper[4725]: I1014 13:30:00.369717 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a18d61b0-f276-4eac-b1c6-bbbc679d5059-secret-volume\") pod \"collect-profiles-29340810-gkvzs\" (UID: \"a18d61b0-f276-4eac-b1c6-bbbc679d5059\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340810-gkvzs" Oct 14 13:30:00 crc kubenswrapper[4725]: I1014 13:30:00.377730 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bhdf\" (UniqueName: \"kubernetes.io/projected/a18d61b0-f276-4eac-b1c6-bbbc679d5059-kube-api-access-9bhdf\") pod \"collect-profiles-29340810-gkvzs\" (UID: \"a18d61b0-f276-4eac-b1c6-bbbc679d5059\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340810-gkvzs" Oct 14 13:30:00 crc kubenswrapper[4725]: I1014 13:30:00.460928 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340810-gkvzs" Oct 14 13:30:02 crc kubenswrapper[4725]: I1014 13:30:02.521053 4725 patch_prober.go:28] interesting pod/machine-config-daemon-t9hh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 13:30:02 crc kubenswrapper[4725]: I1014 13:30:02.521435 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 13:30:02 crc kubenswrapper[4725]: I1014 13:30:02.696524 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340810-gkvzs"] Oct 14 13:30:02 crc kubenswrapper[4725]: W1014 13:30:02.718845 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda18d61b0_f276_4eac_b1c6_bbbc679d5059.slice/crio-1d7f3314996c261a8635f44e2876351f48e0ee87077ac445200405f2a1ef2c1b WatchSource:0}: Error finding container 1d7f3314996c261a8635f44e2876351f48e0ee87077ac445200405f2a1ef2c1b: Status 404 returned error can't find the container with id 1d7f3314996c261a8635f44e2876351f48e0ee87077ac445200405f2a1ef2c1b Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.288893 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-vmdhg" event={"ID":"3d451d86-1531-4612-a2c8-5be10e09f890","Type":"ContainerStarted","Data":"ac2961d64e58239768c62468ed6214db0ef755ec6679b346d30b9b07a9e6eb48"} Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.289468 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-vmdhg" Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.290397 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340810-gkvzs" event={"ID":"a18d61b0-f276-4eac-b1c6-bbbc679d5059","Type":"ContainerStarted","Data":"a0f440f27b514c29f4343e16bd799976f5fd43f6bc903fa5e31645f37f6b9972"} Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.290436 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340810-gkvzs" event={"ID":"a18d61b0-f276-4eac-b1c6-bbbc679d5059","Type":"ContainerStarted","Data":"1d7f3314996c261a8635f44e2876351f48e0ee87077ac445200405f2a1ef2c1b"} Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.291930 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-hbgnj" event={"ID":"f8ae62f1-e80d-4f8c-81c3-0c5c50338046","Type":"ContainerStarted","Data":"5c9a6d9a7cda0c548507c0212621e042edb14a15986345c2eb7b521a2ff52616"} Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.292129 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-hbgnj" Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.294669 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-9q25r" event={"ID":"960fbcd9-1563-4f84-89ac-53694a2413de","Type":"ContainerStarted","Data":"782bb0ed04ad9ddfb329ee89ad2899d098040290cb1bf0299aa9232d76123fa1"} Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.294846 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-9q25r" Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.296338 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-646675d848-l76vk" event={"ID":"50605e76-3c4b-480c-9a76-e84962f38851","Type":"ContainerStarted","Data":"41d4738a2733a042d6a47b57cc8b3cadd5705f18c477522513858e5f65017c0d"} Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.296531 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-646675d848-l76vk" Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.297897 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-797d478b46-55m5j" event={"ID":"15e9c123-7b58-49ab-b7d7-f429e6b15c1e","Type":"ContainerStarted","Data":"b3cc214f2a010fe6cc8eb00ae354a9ddda10fb3a1fbb6c8d1b8e633e171891c7"} Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.297949 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-797d478b46-55m5j" Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.299364 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-pjkj4" event={"ID":"da9db202-78ec-4df7-9ead-374e287391a2","Type":"ContainerStarted","Data":"c759c64e9bbce6e5c16a1c5c856bba78edc4cd5d3e0d3a718dc4cb6dcad82ece"} Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.299945 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-pjkj4" Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.301338 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-nhdz7" event={"ID":"cb31097e-d603-4f4d-8cc1-a5f1a841ea00","Type":"ContainerStarted","Data":"093be7b648d54e5c1face06e51898c4dc1c9b4bca523d441ceb31d3699dcc1f9"} Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.301611 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-pjkj4" Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.301961 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-nhdz7" Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.302055 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-9q25r" Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.303172 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-mvqp4" event={"ID":"720eb902-cb6c-4d90-b25b-eae97e6d055d","Type":"ContainerStarted","Data":"3812c56e33182e1d9c8c41af4d7d3d0c0aea7b48d86aade1eb600fada2764efc"} Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.303476 4725 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-mvqp4" Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.303731 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-797d478b46-55m5j" Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.304667 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-687df44cdb-h8bkv" event={"ID":"40be9ede-1b5e-4022-9626-8367074f88c1","Type":"ContainerStarted","Data":"e73f3e791b740b062e9a09d575e8fafdc26430c935c5c1e062cec214a29feb80"} Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.304830 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-687df44cdb-h8bkv" Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.306283 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-687df44cdb-h8bkv" Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.306368 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-fnf6m" event={"ID":"28bb98a9-5146-48a5-8f4f-a3a7766ab18c","Type":"ContainerStarted","Data":"1c71f16ed66aae02e426a9579d70210cab4217900f585563a0ad643ef000342e"} Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.306556 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-fnf6m" Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.307128 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-mvqp4" Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.308671 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-664664cb68-b462h" event={"ID":"420f32f0-6ea7-4489-8165-215def02acf1","Type":"ContainerStarted","Data":"7663f7e1dc1045a453d8bc86f6d48f41c2d5c6350cded58b5d82e6e831ed6a79"} Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.308814 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-664664cb68-b462h" Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.309842 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-hbgnj" Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.310372 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-664664cb68-b462h" Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.310741 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-wnx2r" event={"ID":"ea9bba90-8416-4b43-a2be-d41f635db481","Type":"ContainerStarted","Data":"35d23df9e5963205f5c48b05deffafdf5a1fcaa3810b86bf3b67ccbaa33f380f"} Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.310967 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-wnx2r" Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.312545 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-w9x95" event={"ID":"8dc80d7a-d38e-4baa-85ae-fa856c39b48f","Type":"ContainerStarted","Data":"f6a3cce5724b3bb8581405438b521546ac5413e736d913f8e47cefa657a3fe51"} Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.312711 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-w9x95" Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.312754 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-wnx2r" Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.317180 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-w9x95" Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.318122 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-d258r" event={"ID":"e4436f5f-2c38-49ee-8e53-061e294f009f","Type":"ContainerStarted","Data":"d7c001d728ba20e6f0530bc92869f572ef402b37357af1bff99da9e8071b9544"} Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.318333 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-d258r" Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.320232 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-878xx" event={"ID":"fb53b295-5121-4b18-9d3d-e3e981a64f2d","Type":"ContainerStarted","Data":"9ac9e7089fac63e5ee384ce409e4a38167b8d8f44805306ff81c22583915dc25"} Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.320484 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-878xx" Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.322167 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-59578bc799-4jwx9" event={"ID":"0ba97bf9-2b5b-495b-99e2-328a987535e1","Type":"ContainerStarted","Data":"90d03771cc784e32a79470a6f3e7d61dd18f2e31a99a30ab03f2aea5ea2bd5b1"} Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.322334 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-59578bc799-4jwx9" Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.333293 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-vmdhg" podStartSLOduration=3.142059917 podStartE2EDuration="22.3332575s" podCreationTimestamp="2025-10-14 13:29:41 +0000 UTC" firstStartedPulling="2025-10-14 13:29:43.22093727 +0000 UTC m=+900.069372079" lastFinishedPulling="2025-10-14 13:30:02.412134853 +0000 UTC m=+919.260569662" observedRunningTime="2025-10-14 13:30:03.333079355 +0000 UTC m=+920.181514164" watchObservedRunningTime="2025-10-14 13:30:03.3332575 +0000 UTC m=+920.181692309" Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.399578 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-hbgnj" podStartSLOduration=8.861156751 podStartE2EDuration="23.399557313s" podCreationTimestamp="2025-10-14 13:29:40 +0000 UTC" 
firstStartedPulling="2025-10-14 13:29:42.584652212 +0000 UTC m=+899.433087021" lastFinishedPulling="2025-10-14 13:29:57.123052774 +0000 UTC m=+913.971487583" observedRunningTime="2025-10-14 13:30:03.396198921 +0000 UTC m=+920.244633730" watchObservedRunningTime="2025-10-14 13:30:03.399557313 +0000 UTC m=+920.247992122" Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.463000 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-797d478b46-55m5j" podStartSLOduration=9.340619975 podStartE2EDuration="23.462979849s" podCreationTimestamp="2025-10-14 13:29:40 +0000 UTC" firstStartedPulling="2025-10-14 13:29:42.959780524 +0000 UTC m=+899.808215333" lastFinishedPulling="2025-10-14 13:29:57.082140398 +0000 UTC m=+913.930575207" observedRunningTime="2025-10-14 13:30:03.460266974 +0000 UTC m=+920.308701783" watchObservedRunningTime="2025-10-14 13:30:03.462979849 +0000 UTC m=+920.311414658" Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.526430 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-687df44cdb-h8bkv" podStartSLOduration=9.023079947 podStartE2EDuration="23.526405065s" podCreationTimestamp="2025-10-14 13:29:40 +0000 UTC" firstStartedPulling="2025-10-14 13:29:42.582696457 +0000 UTC m=+899.431131266" lastFinishedPulling="2025-10-14 13:29:57.086021575 +0000 UTC m=+913.934456384" observedRunningTime="2025-10-14 13:30:03.520746849 +0000 UTC m=+920.369181688" watchObservedRunningTime="2025-10-14 13:30:03.526405065 +0000 UTC m=+920.374839874" Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.565500 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-wnx2r" podStartSLOduration=8.593361464000001 podStartE2EDuration="23.56548159s" podCreationTimestamp="2025-10-14 13:29:40 +0000 UTC" firstStartedPulling="2025-10-14 13:29:42.149743875 +0000 UTC m=+898.998178684" lastFinishedPulling="2025-10-14 13:29:57.121864001 +0000 UTC m=+913.970298810" observedRunningTime="2025-10-14 13:30:03.564611405 +0000 UTC m=+920.413046224" watchObservedRunningTime="2025-10-14 13:30:03.56548159 +0000 UTC m=+920.413916409" Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.605021 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-d258r" podStartSLOduration=4.620845351 podStartE2EDuration="23.605000737s" podCreationTimestamp="2025-10-14 13:29:40 +0000 UTC" firstStartedPulling="2025-10-14 13:29:43.251139261 +0000 UTC m=+900.099574070" lastFinishedPulling="2025-10-14 13:30:02.235294657 +0000 UTC m=+919.083729456" observedRunningTime="2025-10-14 13:30:03.599962608 +0000 UTC m=+920.448397447" watchObservedRunningTime="2025-10-14 13:30:03.605000737 +0000 UTC m=+920.453435546" Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.688699 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-664664cb68-b462h" podStartSLOduration=8.524812022999999 podStartE2EDuration="22.688682069s" podCreationTimestamp="2025-10-14 13:29:41 +0000 UTC" firstStartedPulling="2025-10-14 13:29:42.959739193 +0000 UTC m=+899.808174002" lastFinishedPulling="2025-10-14 13:29:57.123609239 +0000 UTC m=+913.972044048" observedRunningTime="2025-10-14 13:30:03.687694352 +0000 UTC m=+920.536129161" 
watchObservedRunningTime="2025-10-14 13:30:03.688682069 +0000 UTC m=+920.537116878" Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.690408 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29340810-gkvzs" podStartSLOduration=3.690401817 podStartE2EDuration="3.690401817s" podCreationTimestamp="2025-10-14 13:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:30:03.628972606 +0000 UTC m=+920.477407435" watchObservedRunningTime="2025-10-14 13:30:03.690401817 +0000 UTC m=+920.538836626" Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.731169 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-59578bc799-4jwx9" podStartSLOduration=4.554310082 podStartE2EDuration="23.731153459s" podCreationTimestamp="2025-10-14 13:29:40 +0000 UTC" firstStartedPulling="2025-10-14 13:29:43.23437626 +0000 UTC m=+900.082811069" lastFinishedPulling="2025-10-14 13:30:02.411219637 +0000 UTC m=+919.259654446" observedRunningTime="2025-10-14 13:30:03.720059133 +0000 UTC m=+920.568493942" watchObservedRunningTime="2025-10-14 13:30:03.731153459 +0000 UTC m=+920.579588268" Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.763809 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-mvqp4" podStartSLOduration=9.859288656 podStartE2EDuration="23.763794306s" podCreationTimestamp="2025-10-14 13:29:40 +0000 UTC" firstStartedPulling="2025-10-14 13:29:43.220652272 +0000 UTC m=+900.069087081" lastFinishedPulling="2025-10-14 13:29:57.125157922 +0000 UTC m=+913.973592731" observedRunningTime="2025-10-14 13:30:03.759278662 +0000 UTC m=+920.607713471" watchObservedRunningTime="2025-10-14 13:30:03.763794306 +0000 UTC m=+920.612229115" Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.805181 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-nhdz7" podStartSLOduration=3.625612443 podStartE2EDuration="22.805164365s" podCreationTimestamp="2025-10-14 13:29:41 +0000 UTC" firstStartedPulling="2025-10-14 13:29:43.246180875 +0000 UTC m=+900.094615684" lastFinishedPulling="2025-10-14 13:30:02.425732797 +0000 UTC m=+919.274167606" observedRunningTime="2025-10-14 13:30:03.796903917 +0000 UTC m=+920.645338746" watchObservedRunningTime="2025-10-14 13:30:03.805164365 +0000 UTC m=+920.653599174" Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.815323 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-w9x95" podStartSLOduration=9.550893321 podStartE2EDuration="23.815306014s" podCreationTimestamp="2025-10-14 13:29:40 +0000 UTC" firstStartedPulling="2025-10-14 13:29:42.857204061 +0000 UTC m=+899.705638870" lastFinishedPulling="2025-10-14 13:29:57.121616744 +0000 UTC m=+913.970051563" observedRunningTime="2025-10-14 13:30:03.811618162 +0000 UTC m=+920.660052971" watchObservedRunningTime="2025-10-14 13:30:03.815306014 +0000 UTC m=+920.663740823" Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.832877 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-878xx" podStartSLOduration=3.6713365810000003 
podStartE2EDuration="22.832859267s" podCreationTimestamp="2025-10-14 13:29:41 +0000 UTC" firstStartedPulling="2025-10-14 13:29:43.24965491 +0000 UTC m=+900.098089719" lastFinishedPulling="2025-10-14 13:30:02.411177596 +0000 UTC m=+919.259612405" observedRunningTime="2025-10-14 13:30:03.828153177 +0000 UTC m=+920.676587986" watchObservedRunningTime="2025-10-14 13:30:03.832859267 +0000 UTC m=+920.681294076" Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.851363 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-646675d848-l76vk" podStartSLOduration=4.242490407 podStartE2EDuration="22.851346366s" podCreationTimestamp="2025-10-14 13:29:41 +0000 UTC" firstStartedPulling="2025-10-14 13:29:43.229707981 +0000 UTC m=+900.078142790" lastFinishedPulling="2025-10-14 13:30:01.83856394 +0000 UTC m=+918.686998749" observedRunningTime="2025-10-14 13:30:03.849517225 +0000 UTC m=+920.697952044" watchObservedRunningTime="2025-10-14 13:30:03.851346366 +0000 UTC m=+920.699781165" Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.880068 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-pjkj4" podStartSLOduration=9.347253208 podStartE2EDuration="23.880050666s" podCreationTimestamp="2025-10-14 13:29:40 +0000 UTC" firstStartedPulling="2025-10-14 13:29:42.590664047 +0000 UTC m=+899.439098856" lastFinishedPulling="2025-10-14 13:29:57.123461505 +0000 UTC m=+913.971896314" observedRunningTime="2025-10-14 13:30:03.878984976 +0000 UTC m=+920.727419785" watchObservedRunningTime="2025-10-14 13:30:03.880050666 +0000 UTC m=+920.728485475" Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.895932 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-fnf6m" podStartSLOduration=3.6992438979999998 podStartE2EDuration="22.895913881s" podCreationTimestamp="2025-10-14 13:29:41 +0000 UTC" firstStartedPulling="2025-10-14 13:29:43.224589961 +0000 UTC m=+900.073024770" lastFinishedPulling="2025-10-14 13:30:02.421259944 +0000 UTC m=+919.269694753" observedRunningTime="2025-10-14 13:30:03.893256079 +0000 UTC m=+920.741690888" watchObservedRunningTime="2025-10-14 13:30:03.895913881 +0000 UTC m=+920.744348700" Oct 14 13:30:03 crc kubenswrapper[4725]: I1014 13:30:03.913533 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-9q25r" podStartSLOduration=9.722506335 podStartE2EDuration="23.913517007s" podCreationTimestamp="2025-10-14 13:29:40 +0000 UTC" firstStartedPulling="2025-10-14 13:29:42.930975402 +0000 UTC m=+899.779410211" lastFinishedPulling="2025-10-14 13:29:57.121986074 +0000 UTC m=+913.970420883" observedRunningTime="2025-10-14 13:30:03.913317981 +0000 UTC m=+920.761752800" watchObservedRunningTime="2025-10-14 13:30:03.913517007 +0000 UTC m=+920.761951816" Oct 14 13:30:04 crc kubenswrapper[4725]: I1014 13:30:04.332756 4725 generic.go:334] "Generic (PLEG): container finished" podID="a18d61b0-f276-4eac-b1c6-bbbc679d5059" containerID="a0f440f27b514c29f4343e16bd799976f5fd43f6bc903fa5e31645f37f6b9972" exitCode=0 Oct 14 13:30:04 crc kubenswrapper[4725]: I1014 13:30:04.332813 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340810-gkvzs" 
event={"ID":"a18d61b0-f276-4eac-b1c6-bbbc679d5059","Type":"ContainerDied","Data":"a0f440f27b514c29f4343e16bd799976f5fd43f6bc903fa5e31645f37f6b9972"} Oct 14 13:30:05 crc kubenswrapper[4725]: I1014 13:30:05.649429 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340810-gkvzs" Oct 14 13:30:05 crc kubenswrapper[4725]: I1014 13:30:05.840501 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a18d61b0-f276-4eac-b1c6-bbbc679d5059-secret-volume\") pod \"a18d61b0-f276-4eac-b1c6-bbbc679d5059\" (UID: \"a18d61b0-f276-4eac-b1c6-bbbc679d5059\") " Oct 14 13:30:05 crc kubenswrapper[4725]: I1014 13:30:05.840695 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bhdf\" (UniqueName: \"kubernetes.io/projected/a18d61b0-f276-4eac-b1c6-bbbc679d5059-kube-api-access-9bhdf\") pod \"a18d61b0-f276-4eac-b1c6-bbbc679d5059\" (UID: \"a18d61b0-f276-4eac-b1c6-bbbc679d5059\") " Oct 14 13:30:05 crc kubenswrapper[4725]: I1014 13:30:05.841004 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a18d61b0-f276-4eac-b1c6-bbbc679d5059-config-volume\") pod \"a18d61b0-f276-4eac-b1c6-bbbc679d5059\" (UID: \"a18d61b0-f276-4eac-b1c6-bbbc679d5059\") " Oct 14 13:30:05 crc kubenswrapper[4725]: I1014 13:30:05.841503 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a18d61b0-f276-4eac-b1c6-bbbc679d5059-config-volume" (OuterVolumeSpecName: "config-volume") pod "a18d61b0-f276-4eac-b1c6-bbbc679d5059" (UID: "a18d61b0-f276-4eac-b1c6-bbbc679d5059"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:30:05 crc kubenswrapper[4725]: I1014 13:30:05.845976 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a18d61b0-f276-4eac-b1c6-bbbc679d5059-kube-api-access-9bhdf" (OuterVolumeSpecName: "kube-api-access-9bhdf") pod "a18d61b0-f276-4eac-b1c6-bbbc679d5059" (UID: "a18d61b0-f276-4eac-b1c6-bbbc679d5059"). InnerVolumeSpecName "kube-api-access-9bhdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:30:05 crc kubenswrapper[4725]: I1014 13:30:05.848772 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a18d61b0-f276-4eac-b1c6-bbbc679d5059-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a18d61b0-f276-4eac-b1c6-bbbc679d5059" (UID: "a18d61b0-f276-4eac-b1c6-bbbc679d5059"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:30:05 crc kubenswrapper[4725]: I1014 13:30:05.942896 4725 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a18d61b0-f276-4eac-b1c6-bbbc679d5059-config-volume\") on node \"crc\" DevicePath \"\"" Oct 14 13:30:05 crc kubenswrapper[4725]: I1014 13:30:05.942924 4725 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a18d61b0-f276-4eac-b1c6-bbbc679d5059-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 14 13:30:05 crc kubenswrapper[4725]: I1014 13:30:05.942934 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bhdf\" (UniqueName: \"kubernetes.io/projected/a18d61b0-f276-4eac-b1c6-bbbc679d5059-kube-api-access-9bhdf\") on node \"crc\" DevicePath \"\"" Oct 14 13:30:06 crc kubenswrapper[4725]: I1014 13:30:06.353699 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340810-gkvzs" event={"ID":"a18d61b0-f276-4eac-b1c6-bbbc679d5059","Type":"ContainerDied","Data":"1d7f3314996c261a8635f44e2876351f48e0ee87077ac445200405f2a1ef2c1b"} Oct 14 13:30:06 crc kubenswrapper[4725]: I1014 13:30:06.354386 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d7f3314996c261a8635f44e2876351f48e0ee87077ac445200405f2a1ef2c1b" Oct 14 13:30:06 crc kubenswrapper[4725]: I1014 13:30:06.353795 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340810-gkvzs" Oct 14 13:30:11 crc kubenswrapper[4725]: I1014 13:30:11.344249 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-59578bc799-4jwx9" Oct 14 13:30:11 crc kubenswrapper[4725]: I1014 13:30:11.387611 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757d5s454" event={"ID":"1ee65fef-639a-4cff-8e7e-b68f7dd3a4ec","Type":"ContainerStarted","Data":"b3275598ff5c56595804e025c8e22db6e80ca2af0cca81a619b39477cff5199a"} Oct 14 13:30:11 crc kubenswrapper[4725]: I1014 13:30:11.389174 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757d5s454" Oct 14 13:30:11 crc kubenswrapper[4725]: I1014 13:30:11.431118 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757d5s454" podStartSLOduration=4.656827292 podStartE2EDuration="31.431093337s" podCreationTimestamp="2025-10-14 13:29:40 +0000 UTC" firstStartedPulling="2025-10-14 13:29:43.561858801 +0000 UTC m=+900.410293610" lastFinishedPulling="2025-10-14 13:30:10.336124846 +0000 UTC m=+927.184559655" observedRunningTime="2025-10-14 13:30:11.427612601 +0000 UTC m=+928.276047410" watchObservedRunningTime="2025-10-14 13:30:11.431093337 +0000 UTC m=+928.279528166" Oct 14 13:30:11 crc kubenswrapper[4725]: I1014 13:30:11.494785 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-d258r" Oct 14 13:30:11 crc kubenswrapper[4725]: I1014 13:30:11.516847 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-fnf6m" Oct 14 13:30:11 
crc kubenswrapper[4725]: I1014 13:30:11.550287 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-vmdhg" Oct 14 13:30:11 crc kubenswrapper[4725]: I1014 13:30:11.618363 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-878xx" Oct 14 13:30:11 crc kubenswrapper[4725]: I1014 13:30:11.817268 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-646675d848-l76vk" Oct 14 13:30:12 crc kubenswrapper[4725]: I1014 13:30:12.005564 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-nhdz7" Oct 14 13:30:13 crc kubenswrapper[4725]: I1014 13:30:13.404041 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-jgbh4" event={"ID":"3316f5a0-820b-45ec-802a-dc3203f1d9fa","Type":"ContainerStarted","Data":"37614d27128184ae3da9a1025e6d136151011be56505d3146773b4c26291400a"} Oct 14 13:30:13 crc kubenswrapper[4725]: I1014 13:30:13.405943 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-tt2xm" event={"ID":"64f86a1d-32a0-4133-94c3-b59ab14a0d4e","Type":"ContainerStarted","Data":"e866baba44a0017ee04b2f9ac3fac2ee62cb46f5a75514fafaab82fd97759987"} Oct 14 13:30:13 crc kubenswrapper[4725]: I1014 13:30:13.406010 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-jgbh4" Oct 14 13:30:13 crc kubenswrapper[4725]: I1014 13:30:13.406424 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-tt2xm" Oct 14 13:30:13 crc kubenswrapper[4725]: I1014 13:30:13.430594 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-tt2xm" podStartSLOduration=3.953879879 podStartE2EDuration="33.430574257s" podCreationTimestamp="2025-10-14 13:29:40 +0000 UTC" firstStartedPulling="2025-10-14 13:29:42.901736677 +0000 UTC m=+899.750171496" lastFinishedPulling="2025-10-14 13:30:12.378431035 +0000 UTC m=+929.226865874" observedRunningTime="2025-10-14 13:30:13.424331935 +0000 UTC m=+930.272766774" watchObservedRunningTime="2025-10-14 13:30:13.430574257 +0000 UTC m=+930.279009076" Oct 14 13:30:13 crc kubenswrapper[4725]: I1014 13:30:13.440561 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-jgbh4" podStartSLOduration=3.7354841690000002 podStartE2EDuration="33.440544331s" podCreationTimestamp="2025-10-14 13:29:40 +0000 UTC" firstStartedPulling="2025-10-14 13:29:42.917681766 +0000 UTC m=+899.766116575" lastFinishedPulling="2025-10-14 13:30:12.622741928 +0000 UTC m=+929.471176737" observedRunningTime="2025-10-14 13:30:13.437993901 +0000 UTC m=+930.286428720" watchObservedRunningTime="2025-10-14 13:30:13.440544331 +0000 UTC m=+930.288979140" Oct 14 13:30:15 crc kubenswrapper[4725]: I1014 13:30:15.437748 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-96vkx" 
event={"ID":"f0b42b9f-7713-48ca-b148-c42b5d2006f3","Type":"ContainerStarted","Data":"87f9cf5360caf5da7516780b17ba8a705c17bbd58f902711eab36580de443831"} Oct 14 13:30:15 crc kubenswrapper[4725]: I1014 13:30:15.438605 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-96vkx" Oct 14 13:30:15 crc kubenswrapper[4725]: I1014 13:30:15.440486 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-8jwrn" event={"ID":"95db1b3c-9877-4e8f-b756-532ccfd5db7a","Type":"ContainerStarted","Data":"4a2ba750479943a86b0a3ae4afb1a9b700a25964af3938570e459a9b64e756f4"} Oct 14 13:30:15 crc kubenswrapper[4725]: I1014 13:30:15.440809 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-8jwrn" Oct 14 13:30:15 crc kubenswrapper[4725]: I1014 13:30:15.458169 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-96vkx" podStartSLOduration=3.828371656 podStartE2EDuration="35.458145659s" podCreationTimestamp="2025-10-14 13:29:40 +0000 UTC" firstStartedPulling="2025-10-14 13:29:42.925492751 +0000 UTC m=+899.773927560" lastFinishedPulling="2025-10-14 13:30:14.555266754 +0000 UTC m=+931.403701563" observedRunningTime="2025-10-14 13:30:15.451678561 +0000 UTC m=+932.300113400" watchObservedRunningTime="2025-10-14 13:30:15.458145659 +0000 UTC m=+932.306580478" Oct 14 13:30:15 crc kubenswrapper[4725]: I1014 13:30:15.472389 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-8jwrn" podStartSLOduration=3.657314759 podStartE2EDuration="35.472372971s" podCreationTimestamp="2025-10-14 13:29:40 +0000 UTC" firstStartedPulling="2025-10-14 13:29:42.911829365 +0000 UTC m=+899.760264184" lastFinishedPulling="2025-10-14 13:30:14.726887587 +0000 UTC m=+931.575322396" observedRunningTime="2025-10-14 13:30:15.467008393 +0000 UTC m=+932.315443212" watchObservedRunningTime="2025-10-14 13:30:15.472372971 +0000 UTC m=+932.320807780" Oct 14 13:30:21 crc kubenswrapper[4725]: I1014 13:30:21.247795 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-96vkx" Oct 14 13:30:21 crc kubenswrapper[4725]: I1014 13:30:21.263423 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-jgbh4" Oct 14 13:30:21 crc kubenswrapper[4725]: I1014 13:30:21.314736 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-tt2xm" Oct 14 13:30:21 crc kubenswrapper[4725]: I1014 13:30:21.331732 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-8jwrn" Oct 14 13:30:23 crc kubenswrapper[4725]: I1014 13:30:23.011794 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757d5s454" Oct 14 13:30:32 crc kubenswrapper[4725]: I1014 13:30:32.521489 4725 patch_prober.go:28] interesting pod/machine-config-daemon-t9hh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 13:30:32 crc kubenswrapper[4725]: I1014 13:30:32.521921 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 13:30:40 crc kubenswrapper[4725]: I1014 13:30:40.007033 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-jshb9"] Oct 14 13:30:40 crc kubenswrapper[4725]: E1014 13:30:40.008163 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a18d61b0-f276-4eac-b1c6-bbbc679d5059" containerName="collect-profiles" Oct 14 13:30:40 crc kubenswrapper[4725]: I1014 13:30:40.008183 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="a18d61b0-f276-4eac-b1c6-bbbc679d5059" containerName="collect-profiles" Oct 14 13:30:40 crc kubenswrapper[4725]: I1014 13:30:40.008427 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="a18d61b0-f276-4eac-b1c6-bbbc679d5059" containerName="collect-profiles" Oct 14 13:30:40 crc kubenswrapper[4725]: I1014 13:30:40.009730 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-jshb9" Oct 14 13:30:40 crc kubenswrapper[4725]: I1014 13:30:40.011830 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Oct 14 13:30:40 crc kubenswrapper[4725]: I1014 13:30:40.012161 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Oct 14 13:30:40 crc kubenswrapper[4725]: I1014 13:30:40.012326 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Oct 14 13:30:40 crc kubenswrapper[4725]: I1014 13:30:40.012495 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-mnj2c" Oct 14 13:30:40 crc kubenswrapper[4725]: I1014 13:30:40.018897 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-jshb9"] Oct 14 13:30:40 crc kubenswrapper[4725]: I1014 13:30:40.071184 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-cd96f"] Oct 14 13:30:40 crc kubenswrapper[4725]: I1014 13:30:40.072632 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-cd96f" Oct 14 13:30:40 crc kubenswrapper[4725]: I1014 13:30:40.074534 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Oct 14 13:30:40 crc kubenswrapper[4725]: I1014 13:30:40.093502 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-cd96f"] Oct 14 13:30:40 crc kubenswrapper[4725]: I1014 13:30:40.156159 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmfgt\" (UniqueName: \"kubernetes.io/projected/2c3f761a-b546-4e23-9bdd-ada6d0595b10-kube-api-access-jmfgt\") pod \"dnsmasq-dns-675f4bcbfc-jshb9\" (UID: \"2c3f761a-b546-4e23-9bdd-ada6d0595b10\") " pod="openstack/dnsmasq-dns-675f4bcbfc-jshb9" Oct 14 13:30:40 crc kubenswrapper[4725]: I1014 13:30:40.156222 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c3f761a-b546-4e23-9bdd-ada6d0595b10-config\") pod \"dnsmasq-dns-675f4bcbfc-jshb9\" (UID: \"2c3f761a-b546-4e23-9bdd-ada6d0595b10\") " pod="openstack/dnsmasq-dns-675f4bcbfc-jshb9" Oct 14 13:30:40 crc kubenswrapper[4725]: I1014 13:30:40.257033 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/797dd808-4202-4b79-a470-8023a6b859b0-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-cd96f\" (UID: \"797dd808-4202-4b79-a470-8023a6b859b0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-cd96f" Oct 14 13:30:40 crc kubenswrapper[4725]: I1014 13:30:40.257082 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/797dd808-4202-4b79-a470-8023a6b859b0-config\") pod \"dnsmasq-dns-78dd6ddcc-cd96f\" (UID: \"797dd808-4202-4b79-a470-8023a6b859b0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-cd96f" Oct 14 13:30:40 crc kubenswrapper[4725]: I1014 13:30:40.257124 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbvkf\" (UniqueName: \"kubernetes.io/projected/797dd808-4202-4b79-a470-8023a6b859b0-kube-api-access-fbvkf\") pod \"dnsmasq-dns-78dd6ddcc-cd96f\" (UID: \"797dd808-4202-4b79-a470-8023a6b859b0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-cd96f" Oct 14 13:30:40 crc kubenswrapper[4725]: I1014 13:30:40.257166 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmfgt\" (UniqueName: \"kubernetes.io/projected/2c3f761a-b546-4e23-9bdd-ada6d0595b10-kube-api-access-jmfgt\") pod \"dnsmasq-dns-675f4bcbfc-jshb9\" (UID: \"2c3f761a-b546-4e23-9bdd-ada6d0595b10\") " pod="openstack/dnsmasq-dns-675f4bcbfc-jshb9" Oct 14 13:30:40 crc kubenswrapper[4725]: I1014 13:30:40.257215 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c3f761a-b546-4e23-9bdd-ada6d0595b10-config\") pod \"dnsmasq-dns-675f4bcbfc-jshb9\" (UID: \"2c3f761a-b546-4e23-9bdd-ada6d0595b10\") " pod="openstack/dnsmasq-dns-675f4bcbfc-jshb9" Oct 14 13:30:40 crc kubenswrapper[4725]: I1014 13:30:40.258242 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c3f761a-b546-4e23-9bdd-ada6d0595b10-config\") pod \"dnsmasq-dns-675f4bcbfc-jshb9\" (UID: \"2c3f761a-b546-4e23-9bdd-ada6d0595b10\") " pod="openstack/dnsmasq-dns-675f4bcbfc-jshb9" Oct 14 
13:30:40 crc kubenswrapper[4725]: I1014 13:30:40.283306 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmfgt\" (UniqueName: \"kubernetes.io/projected/2c3f761a-b546-4e23-9bdd-ada6d0595b10-kube-api-access-jmfgt\") pod \"dnsmasq-dns-675f4bcbfc-jshb9\" (UID: \"2c3f761a-b546-4e23-9bdd-ada6d0595b10\") " pod="openstack/dnsmasq-dns-675f4bcbfc-jshb9" Oct 14 13:30:40 crc kubenswrapper[4725]: I1014 13:30:40.332949 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-jshb9" Oct 14 13:30:40 crc kubenswrapper[4725]: I1014 13:30:40.358842 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/797dd808-4202-4b79-a470-8023a6b859b0-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-cd96f\" (UID: \"797dd808-4202-4b79-a470-8023a6b859b0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-cd96f" Oct 14 13:30:40 crc kubenswrapper[4725]: I1014 13:30:40.358890 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/797dd808-4202-4b79-a470-8023a6b859b0-config\") pod \"dnsmasq-dns-78dd6ddcc-cd96f\" (UID: \"797dd808-4202-4b79-a470-8023a6b859b0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-cd96f" Oct 14 13:30:40 crc kubenswrapper[4725]: I1014 13:30:40.358930 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbvkf\" (UniqueName: \"kubernetes.io/projected/797dd808-4202-4b79-a470-8023a6b859b0-kube-api-access-fbvkf\") pod \"dnsmasq-dns-78dd6ddcc-cd96f\" (UID: \"797dd808-4202-4b79-a470-8023a6b859b0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-cd96f" Oct 14 13:30:40 crc kubenswrapper[4725]: I1014 13:30:40.359901 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/797dd808-4202-4b79-a470-8023a6b859b0-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-cd96f\" (UID: \"797dd808-4202-4b79-a470-8023a6b859b0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-cd96f" Oct 14 13:30:40 crc kubenswrapper[4725]: I1014 13:30:40.360039 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/797dd808-4202-4b79-a470-8023a6b859b0-config\") pod \"dnsmasq-dns-78dd6ddcc-cd96f\" (UID: \"797dd808-4202-4b79-a470-8023a6b859b0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-cd96f" Oct 14 13:30:40 crc kubenswrapper[4725]: I1014 13:30:40.380234 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbvkf\" (UniqueName: \"kubernetes.io/projected/797dd808-4202-4b79-a470-8023a6b859b0-kube-api-access-fbvkf\") pod \"dnsmasq-dns-78dd6ddcc-cd96f\" (UID: \"797dd808-4202-4b79-a470-8023a6b859b0\") " pod="openstack/dnsmasq-dns-78dd6ddcc-cd96f" Oct 14 13:30:40 crc kubenswrapper[4725]: I1014 13:30:40.386384 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-cd96f" Oct 14 13:30:40 crc kubenswrapper[4725]: I1014 13:30:40.794777 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-jshb9"] Oct 14 13:30:40 crc kubenswrapper[4725]: I1014 13:30:40.868842 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-cd96f"] Oct 14 13:30:40 crc kubenswrapper[4725]: W1014 13:30:40.868902 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod797dd808_4202_4b79_a470_8023a6b859b0.slice/crio-ebcf70262ee0093aea1356acb065037fac286de9dcced4dceb690fe9a497f40b WatchSource:0}: Error finding container ebcf70262ee0093aea1356acb065037fac286de9dcced4dceb690fe9a497f40b: Status 404 returned error can't find the container with id ebcf70262ee0093aea1356acb065037fac286de9dcced4dceb690fe9a497f40b Oct 14 13:30:41 crc kubenswrapper[4725]: I1014 13:30:41.683606 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-jshb9" event={"ID":"2c3f761a-b546-4e23-9bdd-ada6d0595b10","Type":"ContainerStarted","Data":"a27ab98f3e5f3d00a1990615afbfd8c3422ad557b6fc67ea7f2b83724425f603"} Oct 14 13:30:41 crc kubenswrapper[4725]: I1014 13:30:41.685606 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-cd96f" event={"ID":"797dd808-4202-4b79-a470-8023a6b859b0","Type":"ContainerStarted","Data":"ebcf70262ee0093aea1356acb065037fac286de9dcced4dceb690fe9a497f40b"} Oct 14 13:30:43 crc kubenswrapper[4725]: I1014 13:30:43.383091 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-jshb9"] Oct 14 13:30:43 crc kubenswrapper[4725]: I1014 13:30:43.439663 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2srqh"]
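
The 404 "Failed to process watch event" warnings (13:30:02.718845 earlier, 13:30:40.868902 here) are a benign race: the cgroup watcher sees a new cgroup before CRI-O has finished registering the container under it. The cgroup path itself encodes the pod's QoS class, the pod UID with dashes mapped to underscores, and a crio-<container-id> scope. A reconstruction of that naming, derived purely by inspecting the paths logged above; the function name is ours:

    package main

    import (
        "fmt"
        "strings"
    )

    // crioCgroupPath rebuilds the cgroup path format seen in the watch-event
    // warnings: QoS slice, pod slice with underscored UID, CRI-O scope.
    func crioCgroupPath(qos, podUID, containerID string) string {
        slice := "kubepods-" + qos + "-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
        return fmt.Sprintf("/kubepods.slice/kubepods-%s.slice/%s/crio-%s", qos, slice, containerID)
    }

    func main() {
        // Reproduces the path from the 13:30:40.868902 warning above.
        fmt.Println(crioCgroupPath("besteffort",
            "797dd808-4202-4b79-a470-8023a6b859b0",
            "ebcf70262ee0093aea1356acb065037fac286de9dcced4dceb690fe9a497f40b"))
    }

Oct 14 13:30:43 crc kubenswrapper[4725]: I1014 13:30:43.441618 4725 util.go:30] "No sandbox for pod can be found.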
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-2srqh" Oct 14 13:30:43 crc kubenswrapper[4725]: I1014 13:30:43.451388 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2srqh"] Oct 14 13:30:43 crc kubenswrapper[4725]: I1014 13:30:43.616550 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd9edf73-e00e-4b3d-84ea-93201123d400-config\") pod \"dnsmasq-dns-666b6646f7-2srqh\" (UID: \"dd9edf73-e00e-4b3d-84ea-93201123d400\") " pod="openstack/dnsmasq-dns-666b6646f7-2srqh" Oct 14 13:30:43 crc kubenswrapper[4725]: I1014 13:30:43.616620 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd9edf73-e00e-4b3d-84ea-93201123d400-dns-svc\") pod \"dnsmasq-dns-666b6646f7-2srqh\" (UID: \"dd9edf73-e00e-4b3d-84ea-93201123d400\") " pod="openstack/dnsmasq-dns-666b6646f7-2srqh" Oct 14 13:30:43 crc kubenswrapper[4725]: I1014 13:30:43.616648 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-948p8\" (UniqueName: \"kubernetes.io/projected/dd9edf73-e00e-4b3d-84ea-93201123d400-kube-api-access-948p8\") pod \"dnsmasq-dns-666b6646f7-2srqh\" (UID: \"dd9edf73-e00e-4b3d-84ea-93201123d400\") " pod="openstack/dnsmasq-dns-666b6646f7-2srqh" Oct 14 13:30:43 crc kubenswrapper[4725]: I1014 13:30:43.723919 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd9edf73-e00e-4b3d-84ea-93201123d400-config\") pod \"dnsmasq-dns-666b6646f7-2srqh\" (UID: \"dd9edf73-e00e-4b3d-84ea-93201123d400\") " pod="openstack/dnsmasq-dns-666b6646f7-2srqh" Oct 14 13:30:43 crc kubenswrapper[4725]: I1014 13:30:43.723973 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd9edf73-e00e-4b3d-84ea-93201123d400-dns-svc\") pod \"dnsmasq-dns-666b6646f7-2srqh\" (UID: \"dd9edf73-e00e-4b3d-84ea-93201123d400\") " pod="openstack/dnsmasq-dns-666b6646f7-2srqh" Oct 14 13:30:43 crc kubenswrapper[4725]: I1014 13:30:43.723994 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-948p8\" (UniqueName: \"kubernetes.io/projected/dd9edf73-e00e-4b3d-84ea-93201123d400-kube-api-access-948p8\") pod \"dnsmasq-dns-666b6646f7-2srqh\" (UID: \"dd9edf73-e00e-4b3d-84ea-93201123d400\") " pod="openstack/dnsmasq-dns-666b6646f7-2srqh" Oct 14 13:30:43 crc kubenswrapper[4725]: I1014 13:30:43.725085 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd9edf73-e00e-4b3d-84ea-93201123d400-config\") pod \"dnsmasq-dns-666b6646f7-2srqh\" (UID: \"dd9edf73-e00e-4b3d-84ea-93201123d400\") " pod="openstack/dnsmasq-dns-666b6646f7-2srqh" Oct 14 13:30:43 crc kubenswrapper[4725]: I1014 13:30:43.725660 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd9edf73-e00e-4b3d-84ea-93201123d400-dns-svc\") pod \"dnsmasq-dns-666b6646f7-2srqh\" (UID: \"dd9edf73-e00e-4b3d-84ea-93201123d400\") " pod="openstack/dnsmasq-dns-666b6646f7-2srqh" Oct 14 13:30:43 crc kubenswrapper[4725]: I1014 13:30:43.738425 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-cd96f"] Oct 14 13:30:43 crc kubenswrapper[4725]: I1014 13:30:43.744039 
4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-6tlzb"] Oct 14 13:30:43 crc kubenswrapper[4725]: I1014 13:30:43.745195 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-6tlzb" Oct 14 13:30:43 crc kubenswrapper[4725]: I1014 13:30:43.757887 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-948p8\" (UniqueName: \"kubernetes.io/projected/dd9edf73-e00e-4b3d-84ea-93201123d400-kube-api-access-948p8\") pod \"dnsmasq-dns-666b6646f7-2srqh\" (UID: \"dd9edf73-e00e-4b3d-84ea-93201123d400\") " pod="openstack/dnsmasq-dns-666b6646f7-2srqh" Oct 14 13:30:43 crc kubenswrapper[4725]: I1014 13:30:43.761444 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-6tlzb"] Oct 14 13:30:43 crc kubenswrapper[4725]: I1014 13:30:43.776268 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-2srqh" Oct 14 13:30:43 crc kubenswrapper[4725]: I1014 13:30:43.927134 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85c2073b-91ed-4a5a-a28c-f5c073f5e70e-config\") pod \"dnsmasq-dns-57d769cc4f-6tlzb\" (UID: \"85c2073b-91ed-4a5a-a28c-f5c073f5e70e\") " pod="openstack/dnsmasq-dns-57d769cc4f-6tlzb" Oct 14 13:30:43 crc kubenswrapper[4725]: I1014 13:30:43.927502 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnrrm\" (UniqueName: \"kubernetes.io/projected/85c2073b-91ed-4a5a-a28c-f5c073f5e70e-kube-api-access-jnrrm\") pod \"dnsmasq-dns-57d769cc4f-6tlzb\" (UID: \"85c2073b-91ed-4a5a-a28c-f5c073f5e70e\") " pod="openstack/dnsmasq-dns-57d769cc4f-6tlzb" Oct 14 13:30:43 crc kubenswrapper[4725]: I1014 13:30:43.927536 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85c2073b-91ed-4a5a-a28c-f5c073f5e70e-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-6tlzb\" (UID: \"85c2073b-91ed-4a5a-a28c-f5c073f5e70e\") " pod="openstack/dnsmasq-dns-57d769cc4f-6tlzb" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.029917 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnrrm\" (UniqueName: \"kubernetes.io/projected/85c2073b-91ed-4a5a-a28c-f5c073f5e70e-kube-api-access-jnrrm\") pod \"dnsmasq-dns-57d769cc4f-6tlzb\" (UID: \"85c2073b-91ed-4a5a-a28c-f5c073f5e70e\") " pod="openstack/dnsmasq-dns-57d769cc4f-6tlzb" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.029975 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85c2073b-91ed-4a5a-a28c-f5c073f5e70e-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-6tlzb\" (UID: \"85c2073b-91ed-4a5a-a28c-f5c073f5e70e\") " pod="openstack/dnsmasq-dns-57d769cc4f-6tlzb" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.030142 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85c2073b-91ed-4a5a-a28c-f5c073f5e70e-config\") pod \"dnsmasq-dns-57d769cc4f-6tlzb\" (UID: \"85c2073b-91ed-4a5a-a28c-f5c073f5e70e\") " pod="openstack/dnsmasq-dns-57d769cc4f-6tlzb"
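
Each dnsmasq volume here goes through the same reconciler sequence: VerifyControllerAttachedVolume (reconciler_common.go:245), then MountVolume started (reconciler_common.go:218), then MountVolume.SetUp succeeded (operation_generator.go:637), visible above and just below. Pairing the started/succeeded lines per volume yields the per-volume mount latency; a sketch that does this over a journal excerpt on stdin, assuming one entry per line as journalctl normally prints them (it keys on volume name only, which is fine for a short excerpt but would conflate same-named volumes of different pods):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "time"
    )

    // Matches the klog wall time plus the "MountVolume started" /
    // "MountVolume.SetUp succeeded" messages and the quoted volume name,
    // tolerating the \" escapes as they appear in this journal.
    var re = regexp.MustCompile(
        `I(\d{4} \d{2}:\d{2}:\d{2}\.\d+) .*?(MountVolume started|MountVolume\.SetUp succeeded) for volume \\?"([^"\\]+)\\?"`)

    func main() {
        started := map[string]time.Time{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
        for sc.Scan() {
            m := re.FindStringSubmatch(sc.Text())
            if m == nil {
                continue
            }
            ts, _ := time.Parse("0102 15:04:05.000000", m[1]) // year-less; fine for diffs
            if m[2] == "MountVolume started" {
                started[m[3]] = ts
            } else if t0, ok := started[m[3]]; ok {
                fmt.Printf("%-30s %v\n", m[3], ts.Sub(t0))
            }
        }
    }

Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.032483 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: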
\"kubernetes.io/configmap/85c2073b-91ed-4a5a-a28c-f5c073f5e70e-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-6tlzb\" (UID: \"85c2073b-91ed-4a5a-a28c-f5c073f5e70e\") " pod="openstack/dnsmasq-dns-57d769cc4f-6tlzb" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.037367 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85c2073b-91ed-4a5a-a28c-f5c073f5e70e-config\") pod \"dnsmasq-dns-57d769cc4f-6tlzb\" (UID: \"85c2073b-91ed-4a5a-a28c-f5c073f5e70e\") " pod="openstack/dnsmasq-dns-57d769cc4f-6tlzb" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.068624 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnrrm\" (UniqueName: \"kubernetes.io/projected/85c2073b-91ed-4a5a-a28c-f5c073f5e70e-kube-api-access-jnrrm\") pod \"dnsmasq-dns-57d769cc4f-6tlzb\" (UID: \"85c2073b-91ed-4a5a-a28c-f5c073f5e70e\") " pod="openstack/dnsmasq-dns-57d769cc4f-6tlzb" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.105223 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-6tlzb" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.304618 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2srqh"] Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.373775 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.572072 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.574116 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.576782 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-d59t8" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.576966 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.577072 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.577114 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.581524 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.582860 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.583342 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.591504 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.637392 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-6tlzb"] Oct 14 13:30:44 crc kubenswrapper[4725]: W1014 13:30:44.654607 4725 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85c2073b_91ed_4a5a_a28c_f5c073f5e70e.slice/crio-03887b87a5232fa61468ace1642dd4667c071858a0b97bacd4535176e68ec37a WatchSource:0}: Error finding container 03887b87a5232fa61468ace1642dd4667c071858a0b97bacd4535176e68ec37a: Status 404 returned error can't find the container with id 03887b87a5232fa61468ace1642dd4667c071858a0b97bacd4535176e68ec37a Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.721579 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-6tlzb" event={"ID":"85c2073b-91ed-4a5a-a28c-f5c073f5e70e","Type":"ContainerStarted","Data":"03887b87a5232fa61468ace1642dd4667c071858a0b97bacd4535176e68ec37a"} Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.723541 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2srqh" event={"ID":"dd9edf73-e00e-4b3d-84ea-93201123d400","Type":"ContainerStarted","Data":"33a7a5d468119e1a3e6ca620fd8132ee08eedf95a7fc4c3ae35f6641b37ced8b"} Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.742831 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\") " pod="openstack/rabbitmq-server-0" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.742919 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\") " pod="openstack/rabbitmq-server-0" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.742962 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\") " pod="openstack/rabbitmq-server-0" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.742987 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\") " pod="openstack/rabbitmq-server-0" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.743010 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\") " pod="openstack/rabbitmq-server-0" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.743029 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8wp7\" (UniqueName: \"kubernetes.io/projected/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-kube-api-access-d8wp7\") pod \"rabbitmq-server-0\" (UID: \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\") " pod="openstack/rabbitmq-server-0" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.743055 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\") " pod="openstack/rabbitmq-server-0" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.743080 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\") " pod="openstack/rabbitmq-server-0" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.743109 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\") " pod="openstack/rabbitmq-server-0" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.743126 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-config-data\") pod \"rabbitmq-server-0\" (UID: \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\") " pod="openstack/rabbitmq-server-0" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.743147 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\") " pod="openstack/rabbitmq-server-0" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.844663 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\") " pod="openstack/rabbitmq-server-0" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.844970 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\") " pod="openstack/rabbitmq-server-0" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.844989 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-config-data\") pod \"rabbitmq-server-0\" (UID: \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\") " pod="openstack/rabbitmq-server-0" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.845013 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\") " pod="openstack/rabbitmq-server-0" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.845031 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: 
\"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\") " pod="openstack/rabbitmq-server-0" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.845067 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\") " pod="openstack/rabbitmq-server-0" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.845101 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\") " pod="openstack/rabbitmq-server-0" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.845129 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\") " pod="openstack/rabbitmq-server-0" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.845151 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\") " pod="openstack/rabbitmq-server-0" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.845170 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8wp7\" (UniqueName: \"kubernetes.io/projected/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-kube-api-access-d8wp7\") pod \"rabbitmq-server-0\" (UID: \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\") " pod="openstack/rabbitmq-server-0" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.845190 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\") " pod="openstack/rabbitmq-server-0" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.845522 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\") " pod="openstack/rabbitmq-server-0" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.846025 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-config-data\") pod \"rabbitmq-server-0\" (UID: \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\") " pod="openstack/rabbitmq-server-0" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.846083 4725 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-server-0" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.846536 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\") " pod="openstack/rabbitmq-server-0" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.846984 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\") " pod="openstack/rabbitmq-server-0" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.848074 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\") " pod="openstack/rabbitmq-server-0" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.852752 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\") " pod="openstack/rabbitmq-server-0" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.853119 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\") " pod="openstack/rabbitmq-server-0" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.853606 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\") " pod="openstack/rabbitmq-server-0" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.866058 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\") " pod="openstack/rabbitmq-server-0" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.879404 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8wp7\" (UniqueName: \"kubernetes.io/projected/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-kube-api-access-d8wp7\") pod \"rabbitmq-server-0\" (UID: \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\") " pod="openstack/rabbitmq-server-0" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.899851 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\") " pod="openstack/rabbitmq-server-0" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.916236 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.929409 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.935801 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.935821 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.935993 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.936089 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-zgkdc" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.936215 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.936370 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.936491 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Oct 14 13:30:44 crc kubenswrapper[4725]: I1014 13:30:44.941121 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 14 13:30:45 crc kubenswrapper[4725]: I1014 13:30:45.047176 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e690ed1d-b1fe-48b5-817c-d512cef45181-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e690ed1d-b1fe-48b5-817c-d512cef45181\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 13:30:45 crc kubenswrapper[4725]: I1014 13:30:45.047233 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e690ed1d-b1fe-48b5-817c-d512cef45181\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 13:30:45 crc kubenswrapper[4725]: I1014 13:30:45.047258 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e690ed1d-b1fe-48b5-817c-d512cef45181-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e690ed1d-b1fe-48b5-817c-d512cef45181\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 13:30:45 crc kubenswrapper[4725]: I1014 13:30:45.047498 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e690ed1d-b1fe-48b5-817c-d512cef45181-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e690ed1d-b1fe-48b5-817c-d512cef45181\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 13:30:45 crc kubenswrapper[4725]: I1014 13:30:45.047556 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7885\" (UniqueName: \"kubernetes.io/projected/e690ed1d-b1fe-48b5-817c-d512cef45181-kube-api-access-l7885\") pod \"rabbitmq-cell1-server-0\" (UID: \"e690ed1d-b1fe-48b5-817c-d512cef45181\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 13:30:45 crc kubenswrapper[4725]: I1014 13:30:45.047662 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e690ed1d-b1fe-48b5-817c-d512cef45181-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e690ed1d-b1fe-48b5-817c-d512cef45181\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 13:30:45 crc kubenswrapper[4725]: I1014 13:30:45.047712 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e690ed1d-b1fe-48b5-817c-d512cef45181-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e690ed1d-b1fe-48b5-817c-d512cef45181\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 13:30:45 crc kubenswrapper[4725]: I1014 13:30:45.047766 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e690ed1d-b1fe-48b5-817c-d512cef45181-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e690ed1d-b1fe-48b5-817c-d512cef45181\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 13:30:45 crc kubenswrapper[4725]: I1014 13:30:45.047787 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e690ed1d-b1fe-48b5-817c-d512cef45181-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e690ed1d-b1fe-48b5-817c-d512cef45181\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 13:30:45 crc kubenswrapper[4725]: I1014 13:30:45.047830 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e690ed1d-b1fe-48b5-817c-d512cef45181-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e690ed1d-b1fe-48b5-817c-d512cef45181\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 13:30:45 crc kubenswrapper[4725]: I1014 13:30:45.047883 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e690ed1d-b1fe-48b5-817c-d512cef45181-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e690ed1d-b1fe-48b5-817c-d512cef45181\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 13:30:45 crc kubenswrapper[4725]: I1014 13:30:45.149491 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e690ed1d-b1fe-48b5-817c-d512cef45181-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e690ed1d-b1fe-48b5-817c-d512cef45181\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 13:30:45 crc kubenswrapper[4725]: I1014 13:30:45.149575 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7885\" (UniqueName: \"kubernetes.io/projected/e690ed1d-b1fe-48b5-817c-d512cef45181-kube-api-access-l7885\") pod \"rabbitmq-cell1-server-0\" (UID: \"e690ed1d-b1fe-48b5-817c-d512cef45181\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 13:30:45 crc kubenswrapper[4725]: I1014 13:30:45.149624 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e690ed1d-b1fe-48b5-817c-d512cef45181-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e690ed1d-b1fe-48b5-817c-d512cef45181\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 13:30:45 crc kubenswrapper[4725]: I1014 13:30:45.149711 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/e690ed1d-b1fe-48b5-817c-d512cef45181-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e690ed1d-b1fe-48b5-817c-d512cef45181\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 13:30:45 crc kubenswrapper[4725]: I1014 13:30:45.149749 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e690ed1d-b1fe-48b5-817c-d512cef45181-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e690ed1d-b1fe-48b5-817c-d512cef45181\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 13:30:45 crc kubenswrapper[4725]: I1014 13:30:45.149778 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e690ed1d-b1fe-48b5-817c-d512cef45181-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e690ed1d-b1fe-48b5-817c-d512cef45181\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 13:30:45 crc kubenswrapper[4725]: I1014 13:30:45.149812 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e690ed1d-b1fe-48b5-817c-d512cef45181-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e690ed1d-b1fe-48b5-817c-d512cef45181\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 13:30:45 crc kubenswrapper[4725]: I1014 13:30:45.149854 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e690ed1d-b1fe-48b5-817c-d512cef45181-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e690ed1d-b1fe-48b5-817c-d512cef45181\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 13:30:45 crc kubenswrapper[4725]: I1014 13:30:45.149907 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e690ed1d-b1fe-48b5-817c-d512cef45181-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e690ed1d-b1fe-48b5-817c-d512cef45181\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 13:30:45 crc kubenswrapper[4725]: I1014 13:30:45.149933 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e690ed1d-b1fe-48b5-817c-d512cef45181\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 13:30:45 crc kubenswrapper[4725]: I1014 13:30:45.149955 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e690ed1d-b1fe-48b5-817c-d512cef45181-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e690ed1d-b1fe-48b5-817c-d512cef45181\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 13:30:45 crc kubenswrapper[4725]: I1014 13:30:45.150186 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e690ed1d-b1fe-48b5-817c-d512cef45181-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e690ed1d-b1fe-48b5-817c-d512cef45181\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 13:30:45 crc kubenswrapper[4725]: I1014 13:30:45.150308 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e690ed1d-b1fe-48b5-817c-d512cef45181-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"e690ed1d-b1fe-48b5-817c-d512cef45181\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 13:30:45 crc kubenswrapper[4725]: I1014 13:30:45.150333 4725 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e690ed1d-b1fe-48b5-817c-d512cef45181\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-cell1-server-0" Oct 14 13:30:45 crc kubenswrapper[4725]: I1014 13:30:45.151047 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e690ed1d-b1fe-48b5-817c-d512cef45181-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e690ed1d-b1fe-48b5-817c-d512cef45181\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 13:30:45 crc kubenswrapper[4725]: I1014 13:30:45.151373 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e690ed1d-b1fe-48b5-817c-d512cef45181-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e690ed1d-b1fe-48b5-817c-d512cef45181\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 13:30:45 crc kubenswrapper[4725]: I1014 13:30:45.153904 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e690ed1d-b1fe-48b5-817c-d512cef45181-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e690ed1d-b1fe-48b5-817c-d512cef45181\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 13:30:45 crc kubenswrapper[4725]: I1014 13:30:45.164956 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e690ed1d-b1fe-48b5-817c-d512cef45181-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e690ed1d-b1fe-48b5-817c-d512cef45181\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 13:30:45 crc kubenswrapper[4725]: I1014 13:30:45.165260 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e690ed1d-b1fe-48b5-817c-d512cef45181-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e690ed1d-b1fe-48b5-817c-d512cef45181\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 13:30:45 crc kubenswrapper[4725]: I1014 13:30:45.172288 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e690ed1d-b1fe-48b5-817c-d512cef45181-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e690ed1d-b1fe-48b5-817c-d512cef45181\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 13:30:45 crc kubenswrapper[4725]: I1014 13:30:45.173554 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e690ed1d-b1fe-48b5-817c-d512cef45181-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e690ed1d-b1fe-48b5-817c-d512cef45181\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 13:30:45 crc kubenswrapper[4725]: I1014 13:30:45.195118 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e690ed1d-b1fe-48b5-817c-d512cef45181\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 13:30:45 crc kubenswrapper[4725]: I1014 13:30:45.198271 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-l7885\" (UniqueName: \"kubernetes.io/projected/e690ed1d-b1fe-48b5-817c-d512cef45181-kube-api-access-l7885\") pod \"rabbitmq-cell1-server-0\" (UID: \"e690ed1d-b1fe-48b5-817c-d512cef45181\") " pod="openstack/rabbitmq-cell1-server-0" Oct 14 13:30:45 crc kubenswrapper[4725]: I1014 13:30:45.198875 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 14 13:30:45 crc kubenswrapper[4725]: I1014 13:30:45.285776 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 14 13:30:45 crc kubenswrapper[4725]: I1014 13:30:45.700469 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 14 13:30:45 crc kubenswrapper[4725]: W1014 13:30:45.726720 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc3270b7_85d3_4e11_a5e1_d9e42f8b876c.slice/crio-7d73c1343f22fb6bff8d679a6d982d7b28ff6b5dbd0736eb6864fb97044b2d18 WatchSource:0}: Error finding container 7d73c1343f22fb6bff8d679a6d982d7b28ff6b5dbd0736eb6864fb97044b2d18: Status 404 returned error can't find the container with id 7d73c1343f22fb6bff8d679a6d982d7b28ff6b5dbd0736eb6864fb97044b2d18 Oct 14 13:30:45 crc kubenswrapper[4725]: I1014 13:30:45.744467 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c","Type":"ContainerStarted","Data":"7d73c1343f22fb6bff8d679a6d982d7b28ff6b5dbd0736eb6864fb97044b2d18"} Oct 14 13:30:45 crc kubenswrapper[4725]: I1014 13:30:45.819422 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 14 13:30:46 crc kubenswrapper[4725]: I1014 13:30:46.179347 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Oct 14 13:30:46 crc kubenswrapper[4725]: I1014 13:30:46.191023 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Oct 14 13:30:46 crc kubenswrapper[4725]: I1014 13:30:46.194323 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Oct 14 13:30:46 crc kubenswrapper[4725]: I1014 13:30:46.205771 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-qshr4" Oct 14 13:30:46 crc kubenswrapper[4725]: I1014 13:30:46.206126 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Oct 14 13:30:46 crc kubenswrapper[4725]: I1014 13:30:46.209504 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Oct 14 13:30:46 crc kubenswrapper[4725]: I1014 13:30:46.209640 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Oct 14 13:30:46 crc kubenswrapper[4725]: I1014 13:30:46.209971 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Oct 14 13:30:46 crc kubenswrapper[4725]: I1014 13:30:46.212720 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Oct 14 13:30:46 crc kubenswrapper[4725]: I1014 13:30:46.291713 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"b72e8db7-f91f-41b1-95bd-366cd156f5ed\") " pod="openstack/openstack-galera-0" Oct 14 13:30:46 crc kubenswrapper[4725]: I1014 13:30:46.291769 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b72e8db7-f91f-41b1-95bd-366cd156f5ed-config-data-default\") pod \"openstack-galera-0\" (UID: \"b72e8db7-f91f-41b1-95bd-366cd156f5ed\") " pod="openstack/openstack-galera-0" Oct 14 13:30:46 crc kubenswrapper[4725]: I1014 13:30:46.291809 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b72e8db7-f91f-41b1-95bd-366cd156f5ed-operator-scripts\") pod \"openstack-galera-0\" (UID: \"b72e8db7-f91f-41b1-95bd-366cd156f5ed\") " pod="openstack/openstack-galera-0" Oct 14 13:30:46 crc kubenswrapper[4725]: I1014 13:30:46.291839 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b72e8db7-f91f-41b1-95bd-366cd156f5ed-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"b72e8db7-f91f-41b1-95bd-366cd156f5ed\") " pod="openstack/openstack-galera-0" Oct 14 13:30:46 crc kubenswrapper[4725]: I1014 13:30:46.291864 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b72e8db7-f91f-41b1-95bd-366cd156f5ed-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"b72e8db7-f91f-41b1-95bd-366cd156f5ed\") " pod="openstack/openstack-galera-0" Oct 14 13:30:46 crc kubenswrapper[4725]: I1014 13:30:46.291887 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b72e8db7-f91f-41b1-95bd-366cd156f5ed-kolla-config\") pod \"openstack-galera-0\" (UID: \"b72e8db7-f91f-41b1-95bd-366cd156f5ed\") " pod="openstack/openstack-galera-0" Oct 14 13:30:46 crc 
kubenswrapper[4725]: I1014 13:30:46.291935 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/b72e8db7-f91f-41b1-95bd-366cd156f5ed-secrets\") pod \"openstack-galera-0\" (UID: \"b72e8db7-f91f-41b1-95bd-366cd156f5ed\") " pod="openstack/openstack-galera-0" Oct 14 13:30:46 crc kubenswrapper[4725]: I1014 13:30:46.291974 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b72e8db7-f91f-41b1-95bd-366cd156f5ed-config-data-generated\") pod \"openstack-galera-0\" (UID: \"b72e8db7-f91f-41b1-95bd-366cd156f5ed\") " pod="openstack/openstack-galera-0" Oct 14 13:30:46 crc kubenswrapper[4725]: I1014 13:30:46.292008 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgv25\" (UniqueName: \"kubernetes.io/projected/b72e8db7-f91f-41b1-95bd-366cd156f5ed-kube-api-access-bgv25\") pod \"openstack-galera-0\" (UID: \"b72e8db7-f91f-41b1-95bd-366cd156f5ed\") " pod="openstack/openstack-galera-0" Oct 14 13:30:46 crc kubenswrapper[4725]: I1014 13:30:46.392954 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgv25\" (UniqueName: \"kubernetes.io/projected/b72e8db7-f91f-41b1-95bd-366cd156f5ed-kube-api-access-bgv25\") pod \"openstack-galera-0\" (UID: \"b72e8db7-f91f-41b1-95bd-366cd156f5ed\") " pod="openstack/openstack-galera-0" Oct 14 13:30:46 crc kubenswrapper[4725]: I1014 13:30:46.393089 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"b72e8db7-f91f-41b1-95bd-366cd156f5ed\") " pod="openstack/openstack-galera-0" Oct 14 13:30:46 crc kubenswrapper[4725]: I1014 13:30:46.393123 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b72e8db7-f91f-41b1-95bd-366cd156f5ed-config-data-default\") pod \"openstack-galera-0\" (UID: \"b72e8db7-f91f-41b1-95bd-366cd156f5ed\") " pod="openstack/openstack-galera-0" Oct 14 13:30:46 crc kubenswrapper[4725]: I1014 13:30:46.393149 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b72e8db7-f91f-41b1-95bd-366cd156f5ed-operator-scripts\") pod \"openstack-galera-0\" (UID: \"b72e8db7-f91f-41b1-95bd-366cd156f5ed\") " pod="openstack/openstack-galera-0" Oct 14 13:30:46 crc kubenswrapper[4725]: I1014 13:30:46.393188 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b72e8db7-f91f-41b1-95bd-366cd156f5ed-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"b72e8db7-f91f-41b1-95bd-366cd156f5ed\") " pod="openstack/openstack-galera-0" Oct 14 13:30:46 crc kubenswrapper[4725]: I1014 13:30:46.393209 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b72e8db7-f91f-41b1-95bd-366cd156f5ed-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"b72e8db7-f91f-41b1-95bd-366cd156f5ed\") " pod="openstack/openstack-galera-0" Oct 14 13:30:46 crc kubenswrapper[4725]: I1014 13:30:46.393231 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/b72e8db7-f91f-41b1-95bd-366cd156f5ed-kolla-config\") pod \"openstack-galera-0\" (UID: \"b72e8db7-f91f-41b1-95bd-366cd156f5ed\") " pod="openstack/openstack-galera-0" Oct 14 13:30:46 crc kubenswrapper[4725]: I1014 13:30:46.393267 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/b72e8db7-f91f-41b1-95bd-366cd156f5ed-secrets\") pod \"openstack-galera-0\" (UID: \"b72e8db7-f91f-41b1-95bd-366cd156f5ed\") " pod="openstack/openstack-galera-0" Oct 14 13:30:46 crc kubenswrapper[4725]: I1014 13:30:46.393296 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b72e8db7-f91f-41b1-95bd-366cd156f5ed-config-data-generated\") pod \"openstack-galera-0\" (UID: \"b72e8db7-f91f-41b1-95bd-366cd156f5ed\") " pod="openstack/openstack-galera-0" Oct 14 13:30:46 crc kubenswrapper[4725]: I1014 13:30:46.393805 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b72e8db7-f91f-41b1-95bd-366cd156f5ed-config-data-generated\") pod \"openstack-galera-0\" (UID: \"b72e8db7-f91f-41b1-95bd-366cd156f5ed\") " pod="openstack/openstack-galera-0" Oct 14 13:30:46 crc kubenswrapper[4725]: I1014 13:30:46.393866 4725 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"b72e8db7-f91f-41b1-95bd-366cd156f5ed\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-galera-0" Oct 14 13:30:46 crc kubenswrapper[4725]: I1014 13:30:46.395068 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b72e8db7-f91f-41b1-95bd-366cd156f5ed-kolla-config\") pod \"openstack-galera-0\" (UID: \"b72e8db7-f91f-41b1-95bd-366cd156f5ed\") " pod="openstack/openstack-galera-0" Oct 14 13:30:46 crc kubenswrapper[4725]: I1014 13:30:46.396022 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b72e8db7-f91f-41b1-95bd-366cd156f5ed-config-data-default\") pod \"openstack-galera-0\" (UID: \"b72e8db7-f91f-41b1-95bd-366cd156f5ed\") " pod="openstack/openstack-galera-0" Oct 14 13:30:46 crc kubenswrapper[4725]: I1014 13:30:46.397779 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b72e8db7-f91f-41b1-95bd-366cd156f5ed-operator-scripts\") pod \"openstack-galera-0\" (UID: \"b72e8db7-f91f-41b1-95bd-366cd156f5ed\") " pod="openstack/openstack-galera-0" Oct 14 13:30:46 crc kubenswrapper[4725]: I1014 13:30:46.409010 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b72e8db7-f91f-41b1-95bd-366cd156f5ed-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"b72e8db7-f91f-41b1-95bd-366cd156f5ed\") " pod="openstack/openstack-galera-0" Oct 14 13:30:46 crc kubenswrapper[4725]: I1014 13:30:46.414791 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/b72e8db7-f91f-41b1-95bd-366cd156f5ed-secrets\") pod \"openstack-galera-0\" (UID: \"b72e8db7-f91f-41b1-95bd-366cd156f5ed\") " pod="openstack/openstack-galera-0" Oct 14 13:30:46 crc kubenswrapper[4725]: I1014 13:30:46.419662 
4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b72e8db7-f91f-41b1-95bd-366cd156f5ed-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"b72e8db7-f91f-41b1-95bd-366cd156f5ed\") " pod="openstack/openstack-galera-0" Oct 14 13:30:46 crc kubenswrapper[4725]: I1014 13:30:46.427480 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgv25\" (UniqueName: \"kubernetes.io/projected/b72e8db7-f91f-41b1-95bd-366cd156f5ed-kube-api-access-bgv25\") pod \"openstack-galera-0\" (UID: \"b72e8db7-f91f-41b1-95bd-366cd156f5ed\") " pod="openstack/openstack-galera-0" Oct 14 13:30:46 crc kubenswrapper[4725]: I1014 13:30:46.454847 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"b72e8db7-f91f-41b1-95bd-366cd156f5ed\") " pod="openstack/openstack-galera-0" Oct 14 13:30:46 crc kubenswrapper[4725]: I1014 13:30:46.529805 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Oct 14 13:30:46 crc kubenswrapper[4725]: I1014 13:30:46.758589 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e690ed1d-b1fe-48b5-817c-d512cef45181","Type":"ContainerStarted","Data":"cdaac290a0bdf527579940451a43a2bc97038b579e5fcecbb84f97c28b05643b"} Oct 14 13:30:47 crc kubenswrapper[4725]: I1014 13:30:47.530116 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 14 13:30:47 crc kubenswrapper[4725]: I1014 13:30:47.531632 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 14 13:30:47 crc kubenswrapper[4725]: I1014 13:30:47.579751 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Oct 14 13:30:47 crc kubenswrapper[4725]: I1014 13:30:47.579761 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Oct 14 13:30:47 crc kubenswrapper[4725]: I1014 13:30:47.579873 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Oct 14 13:30:47 crc kubenswrapper[4725]: I1014 13:30:47.579777 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-74wvp" Oct 14 13:30:47 crc kubenswrapper[4725]: I1014 13:30:47.609148 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 14 13:30:47 crc kubenswrapper[4725]: I1014 13:30:47.612912 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"697b603d-cd65-466b-930a-86c43c8483ba\") " pod="openstack/openstack-cell1-galera-0" Oct 14 13:30:47 crc kubenswrapper[4725]: I1014 13:30:47.612952 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/697b603d-cd65-466b-930a-86c43c8483ba-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"697b603d-cd65-466b-930a-86c43c8483ba\") " pod="openstack/openstack-cell1-galera-0" Oct 14 13:30:47 crc kubenswrapper[4725]: I1014 13:30:47.613001 4725 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/697b603d-cd65-466b-930a-86c43c8483ba-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"697b603d-cd65-466b-930a-86c43c8483ba\") " pod="openstack/openstack-cell1-galera-0" Oct 14 13:30:47 crc kubenswrapper[4725]: I1014 13:30:47.613021 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/697b603d-cd65-466b-930a-86c43c8483ba-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"697b603d-cd65-466b-930a-86c43c8483ba\") " pod="openstack/openstack-cell1-galera-0" Oct 14 13:30:47 crc kubenswrapper[4725]: I1014 13:30:47.613063 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/697b603d-cd65-466b-930a-86c43c8483ba-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"697b603d-cd65-466b-930a-86c43c8483ba\") " pod="openstack/openstack-cell1-galera-0" Oct 14 13:30:47 crc kubenswrapper[4725]: I1014 13:30:47.613085 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/697b603d-cd65-466b-930a-86c43c8483ba-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"697b603d-cd65-466b-930a-86c43c8483ba\") " pod="openstack/openstack-cell1-galera-0" Oct 14 13:30:47 crc kubenswrapper[4725]: I1014 13:30:47.613117 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m77d\" (UniqueName: \"kubernetes.io/projected/697b603d-cd65-466b-930a-86c43c8483ba-kube-api-access-2m77d\") pod \"openstack-cell1-galera-0\" (UID: \"697b603d-cd65-466b-930a-86c43c8483ba\") " pod="openstack/openstack-cell1-galera-0" Oct 14 13:30:47 crc kubenswrapper[4725]: I1014 13:30:47.613136 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/697b603d-cd65-466b-930a-86c43c8483ba-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"697b603d-cd65-466b-930a-86c43c8483ba\") " pod="openstack/openstack-cell1-galera-0" Oct 14 13:30:47 crc kubenswrapper[4725]: I1014 13:30:47.613154 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/697b603d-cd65-466b-930a-86c43c8483ba-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"697b603d-cd65-466b-930a-86c43c8483ba\") " pod="openstack/openstack-cell1-galera-0" Oct 14 13:30:47 crc kubenswrapper[4725]: I1014 13:30:47.715223 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/697b603d-cd65-466b-930a-86c43c8483ba-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"697b603d-cd65-466b-930a-86c43c8483ba\") " pod="openstack/openstack-cell1-galera-0" Oct 14 13:30:47 crc kubenswrapper[4725]: I1014 13:30:47.715303 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/697b603d-cd65-466b-930a-86c43c8483ba-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"697b603d-cd65-466b-930a-86c43c8483ba\") " pod="openstack/openstack-cell1-galera-0" Oct 14 13:30:47 
crc kubenswrapper[4725]: I1014 13:30:47.715340 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m77d\" (UniqueName: \"kubernetes.io/projected/697b603d-cd65-466b-930a-86c43c8483ba-kube-api-access-2m77d\") pod \"openstack-cell1-galera-0\" (UID: \"697b603d-cd65-466b-930a-86c43c8483ba\") " pod="openstack/openstack-cell1-galera-0" Oct 14 13:30:47 crc kubenswrapper[4725]: I1014 13:30:47.715387 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/697b603d-cd65-466b-930a-86c43c8483ba-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"697b603d-cd65-466b-930a-86c43c8483ba\") " pod="openstack/openstack-cell1-galera-0" Oct 14 13:30:47 crc kubenswrapper[4725]: I1014 13:30:47.715410 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/697b603d-cd65-466b-930a-86c43c8483ba-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"697b603d-cd65-466b-930a-86c43c8483ba\") " pod="openstack/openstack-cell1-galera-0" Oct 14 13:30:47 crc kubenswrapper[4725]: I1014 13:30:47.715477 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"697b603d-cd65-466b-930a-86c43c8483ba\") " pod="openstack/openstack-cell1-galera-0" Oct 14 13:30:47 crc kubenswrapper[4725]: I1014 13:30:47.715511 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/697b603d-cd65-466b-930a-86c43c8483ba-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"697b603d-cd65-466b-930a-86c43c8483ba\") " pod="openstack/openstack-cell1-galera-0" Oct 14 13:30:47 crc kubenswrapper[4725]: I1014 13:30:47.715580 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/697b603d-cd65-466b-930a-86c43c8483ba-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"697b603d-cd65-466b-930a-86c43c8483ba\") " pod="openstack/openstack-cell1-galera-0" Oct 14 13:30:47 crc kubenswrapper[4725]: I1014 13:30:47.715600 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/697b603d-cd65-466b-930a-86c43c8483ba-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"697b603d-cd65-466b-930a-86c43c8483ba\") " pod="openstack/openstack-cell1-galera-0" Oct 14 13:30:47 crc kubenswrapper[4725]: I1014 13:30:47.719252 4725 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"697b603d-cd65-466b-930a-86c43c8483ba\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-cell1-galera-0" Oct 14 13:30:47 crc kubenswrapper[4725]: I1014 13:30:47.720560 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/697b603d-cd65-466b-930a-86c43c8483ba-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"697b603d-cd65-466b-930a-86c43c8483ba\") " pod="openstack/openstack-cell1-galera-0" Oct 14 13:30:47 crc kubenswrapper[4725]: I1014 13:30:47.720636 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/697b603d-cd65-466b-930a-86c43c8483ba-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"697b603d-cd65-466b-930a-86c43c8483ba\") " pod="openstack/openstack-cell1-galera-0" Oct 14 13:30:47 crc kubenswrapper[4725]: I1014 13:30:47.720873 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/697b603d-cd65-466b-930a-86c43c8483ba-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"697b603d-cd65-466b-930a-86c43c8483ba\") " pod="openstack/openstack-cell1-galera-0" Oct 14 13:30:47 crc kubenswrapper[4725]: I1014 13:30:47.720994 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/697b603d-cd65-466b-930a-86c43c8483ba-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"697b603d-cd65-466b-930a-86c43c8483ba\") " pod="openstack/openstack-cell1-galera-0" Oct 14 13:30:47 crc kubenswrapper[4725]: I1014 13:30:47.723622 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/697b603d-cd65-466b-930a-86c43c8483ba-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"697b603d-cd65-466b-930a-86c43c8483ba\") " pod="openstack/openstack-cell1-galera-0" Oct 14 13:30:47 crc kubenswrapper[4725]: I1014 13:30:47.728681 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/697b603d-cd65-466b-930a-86c43c8483ba-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"697b603d-cd65-466b-930a-86c43c8483ba\") " pod="openstack/openstack-cell1-galera-0" Oct 14 13:30:47 crc kubenswrapper[4725]: I1014 13:30:47.735887 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/697b603d-cd65-466b-930a-86c43c8483ba-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"697b603d-cd65-466b-930a-86c43c8483ba\") " pod="openstack/openstack-cell1-galera-0" Oct 14 13:30:47 crc kubenswrapper[4725]: I1014 13:30:47.750234 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m77d\" (UniqueName: \"kubernetes.io/projected/697b603d-cd65-466b-930a-86c43c8483ba-kube-api-access-2m77d\") pod \"openstack-cell1-galera-0\" (UID: \"697b603d-cd65-466b-930a-86c43c8483ba\") " pod="openstack/openstack-cell1-galera-0" Oct 14 13:30:47 crc kubenswrapper[4725]: I1014 13:30:47.767820 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"697b603d-cd65-466b-930a-86c43c8483ba\") " pod="openstack/openstack-cell1-galera-0" Oct 14 13:30:47 crc kubenswrapper[4725]: I1014 13:30:47.906194 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 14 13:30:47 crc kubenswrapper[4725]: I1014 13:30:47.915771 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Oct 14 13:30:47 crc kubenswrapper[4725]: I1014 13:30:47.917627 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Oct 14 13:30:47 crc kubenswrapper[4725]: I1014 13:30:47.923820 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-qn49l" Oct 14 13:30:47 crc kubenswrapper[4725]: I1014 13:30:47.924093 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Oct 14 13:30:47 crc kubenswrapper[4725]: I1014 13:30:47.924253 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Oct 14 13:30:47 crc kubenswrapper[4725]: I1014 13:30:47.960630 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Oct 14 13:30:48 crc kubenswrapper[4725]: I1014 13:30:48.020066 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/166bc2c3-283f-4c1d-815b-54fffa8192d5-combined-ca-bundle\") pod \"memcached-0\" (UID: \"166bc2c3-283f-4c1d-815b-54fffa8192d5\") " pod="openstack/memcached-0" Oct 14 13:30:48 crc kubenswrapper[4725]: I1014 13:30:48.020907 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/166bc2c3-283f-4c1d-815b-54fffa8192d5-kolla-config\") pod \"memcached-0\" (UID: \"166bc2c3-283f-4c1d-815b-54fffa8192d5\") " pod="openstack/memcached-0" Oct 14 13:30:48 crc kubenswrapper[4725]: I1014 13:30:48.021322 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/166bc2c3-283f-4c1d-815b-54fffa8192d5-memcached-tls-certs\") pod \"memcached-0\" (UID: \"166bc2c3-283f-4c1d-815b-54fffa8192d5\") " pod="openstack/memcached-0" Oct 14 13:30:48 crc kubenswrapper[4725]: I1014 13:30:48.021413 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/166bc2c3-283f-4c1d-815b-54fffa8192d5-config-data\") pod \"memcached-0\" (UID: \"166bc2c3-283f-4c1d-815b-54fffa8192d5\") " pod="openstack/memcached-0" Oct 14 13:30:48 crc kubenswrapper[4725]: I1014 13:30:48.021473 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkfxw\" (UniqueName: \"kubernetes.io/projected/166bc2c3-283f-4c1d-815b-54fffa8192d5-kube-api-access-bkfxw\") pod \"memcached-0\" (UID: \"166bc2c3-283f-4c1d-815b-54fffa8192d5\") " pod="openstack/memcached-0" Oct 14 13:30:48 crc kubenswrapper[4725]: I1014 13:30:48.129651 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/166bc2c3-283f-4c1d-815b-54fffa8192d5-kolla-config\") pod \"memcached-0\" (UID: \"166bc2c3-283f-4c1d-815b-54fffa8192d5\") " pod="openstack/memcached-0" Oct 14 13:30:48 crc kubenswrapper[4725]: I1014 13:30:48.129758 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/166bc2c3-283f-4c1d-815b-54fffa8192d5-memcached-tls-certs\") pod \"memcached-0\" (UID: \"166bc2c3-283f-4c1d-815b-54fffa8192d5\") " pod="openstack/memcached-0" Oct 14 13:30:48 crc kubenswrapper[4725]: I1014 13:30:48.129790 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/166bc2c3-283f-4c1d-815b-54fffa8192d5-config-data\") pod \"memcached-0\" (UID: \"166bc2c3-283f-4c1d-815b-54fffa8192d5\") " pod="openstack/memcached-0" Oct 14 13:30:48 crc kubenswrapper[4725]: I1014 13:30:48.129836 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkfxw\" (UniqueName: \"kubernetes.io/projected/166bc2c3-283f-4c1d-815b-54fffa8192d5-kube-api-access-bkfxw\") pod \"memcached-0\" (UID: \"166bc2c3-283f-4c1d-815b-54fffa8192d5\") " pod="openstack/memcached-0" Oct 14 13:30:48 crc kubenswrapper[4725]: I1014 13:30:48.129948 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/166bc2c3-283f-4c1d-815b-54fffa8192d5-combined-ca-bundle\") pod \"memcached-0\" (UID: \"166bc2c3-283f-4c1d-815b-54fffa8192d5\") " pod="openstack/memcached-0" Oct 14 13:30:48 crc kubenswrapper[4725]: I1014 13:30:48.130901 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/166bc2c3-283f-4c1d-815b-54fffa8192d5-config-data\") pod \"memcached-0\" (UID: \"166bc2c3-283f-4c1d-815b-54fffa8192d5\") " pod="openstack/memcached-0" Oct 14 13:30:48 crc kubenswrapper[4725]: I1014 13:30:48.130952 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/166bc2c3-283f-4c1d-815b-54fffa8192d5-kolla-config\") pod \"memcached-0\" (UID: \"166bc2c3-283f-4c1d-815b-54fffa8192d5\") " pod="openstack/memcached-0" Oct 14 13:30:48 crc kubenswrapper[4725]: I1014 13:30:48.135879 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/166bc2c3-283f-4c1d-815b-54fffa8192d5-memcached-tls-certs\") pod \"memcached-0\" (UID: \"166bc2c3-283f-4c1d-815b-54fffa8192d5\") " pod="openstack/memcached-0" Oct 14 13:30:48 crc kubenswrapper[4725]: I1014 13:30:48.168385 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/166bc2c3-283f-4c1d-815b-54fffa8192d5-combined-ca-bundle\") pod \"memcached-0\" (UID: \"166bc2c3-283f-4c1d-815b-54fffa8192d5\") " pod="openstack/memcached-0" Oct 14 13:30:48 crc kubenswrapper[4725]: I1014 13:30:48.168569 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkfxw\" (UniqueName: \"kubernetes.io/projected/166bc2c3-283f-4c1d-815b-54fffa8192d5-kube-api-access-bkfxw\") pod \"memcached-0\" (UID: \"166bc2c3-283f-4c1d-815b-54fffa8192d5\") " pod="openstack/memcached-0" Oct 14 13:30:48 crc kubenswrapper[4725]: I1014 13:30:48.260679 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Oct 14 13:30:49 crc kubenswrapper[4725]: I1014 13:30:49.721891 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Oct 14 13:30:49 crc kubenswrapper[4725]: I1014 13:30:49.724714 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 14 13:30:49 crc kubenswrapper[4725]: I1014 13:30:49.728818 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-l2nxl" Oct 14 13:30:49 crc kubenswrapper[4725]: I1014 13:30:49.731797 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 14 13:30:49 crc kubenswrapper[4725]: I1014 13:30:49.759218 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhs8m\" (UniqueName: \"kubernetes.io/projected/b08afeea-2257-428e-be50-e2cf1b5cc67e-kube-api-access-rhs8m\") pod \"kube-state-metrics-0\" (UID: \"b08afeea-2257-428e-be50-e2cf1b5cc67e\") " pod="openstack/kube-state-metrics-0" Oct 14 13:30:49 crc kubenswrapper[4725]: I1014 13:30:49.860251 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhs8m\" (UniqueName: \"kubernetes.io/projected/b08afeea-2257-428e-be50-e2cf1b5cc67e-kube-api-access-rhs8m\") pod \"kube-state-metrics-0\" (UID: \"b08afeea-2257-428e-be50-e2cf1b5cc67e\") " pod="openstack/kube-state-metrics-0" Oct 14 13:30:49 crc kubenswrapper[4725]: I1014 13:30:49.878425 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhs8m\" (UniqueName: \"kubernetes.io/projected/b08afeea-2257-428e-be50-e2cf1b5cc67e-kube-api-access-rhs8m\") pod \"kube-state-metrics-0\" (UID: \"b08afeea-2257-428e-be50-e2cf1b5cc67e\") " pod="openstack/kube-state-metrics-0" Oct 14 13:30:50 crc kubenswrapper[4725]: I1014 13:30:50.043623 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 14 13:30:53 crc kubenswrapper[4725]: I1014 13:30:53.781531 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 14 13:30:53 crc kubenswrapper[4725]: I1014 13:30:53.787722 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Oct 14 13:30:53 crc kubenswrapper[4725]: I1014 13:30:53.790971 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-gnzlh" Oct 14 13:30:53 crc kubenswrapper[4725]: I1014 13:30:53.791275 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Oct 14 13:30:53 crc kubenswrapper[4725]: I1014 13:30:53.791397 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Oct 14 13:30:53 crc kubenswrapper[4725]: I1014 13:30:53.793707 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Oct 14 13:30:53 crc kubenswrapper[4725]: I1014 13:30:53.799179 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 14 13:30:53 crc kubenswrapper[4725]: I1014 13:30:53.800130 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Oct 14 13:30:53 crc kubenswrapper[4725]: I1014 13:30:53.823602 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb174fae-72d3-45b5-b008-80b4fa482f1e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"cb174fae-72d3-45b5-b008-80b4fa482f1e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 13:30:53 crc kubenswrapper[4725]: I1014 13:30:53.823664 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcl64\" (UniqueName: \"kubernetes.io/projected/cb174fae-72d3-45b5-b008-80b4fa482f1e-kube-api-access-vcl64\") pod \"ovsdbserver-nb-0\" (UID: \"cb174fae-72d3-45b5-b008-80b4fa482f1e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 13:30:53 crc kubenswrapper[4725]: I1014 13:30:53.823728 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cb174fae-72d3-45b5-b008-80b4fa482f1e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"cb174fae-72d3-45b5-b008-80b4fa482f1e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 13:30:53 crc kubenswrapper[4725]: I1014 13:30:53.823758 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb174fae-72d3-45b5-b008-80b4fa482f1e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cb174fae-72d3-45b5-b008-80b4fa482f1e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 13:30:53 crc kubenswrapper[4725]: I1014 13:30:53.823803 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb174fae-72d3-45b5-b008-80b4fa482f1e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"cb174fae-72d3-45b5-b008-80b4fa482f1e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 13:30:53 crc kubenswrapper[4725]: I1014 13:30:53.823839 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb174fae-72d3-45b5-b008-80b4fa482f1e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cb174fae-72d3-45b5-b008-80b4fa482f1e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 13:30:53 crc kubenswrapper[4725]: I1014 13:30:53.823875 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb174fae-72d3-45b5-b008-80b4fa482f1e-config\") pod \"ovsdbserver-nb-0\" (UID: \"cb174fae-72d3-45b5-b008-80b4fa482f1e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 13:30:53 crc kubenswrapper[4725]: I1014 13:30:53.823894 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"cb174fae-72d3-45b5-b008-80b4fa482f1e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 13:30:53 crc kubenswrapper[4725]: I1014 13:30:53.924986 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb174fae-72d3-45b5-b008-80b4fa482f1e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"cb174fae-72d3-45b5-b008-80b4fa482f1e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 13:30:53 crc kubenswrapper[4725]: I1014 13:30:53.925038 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcl64\" (UniqueName: \"kubernetes.io/projected/cb174fae-72d3-45b5-b008-80b4fa482f1e-kube-api-access-vcl64\") pod \"ovsdbserver-nb-0\" (UID: \"cb174fae-72d3-45b5-b008-80b4fa482f1e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 13:30:53 crc kubenswrapper[4725]: I1014 13:30:53.925083 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cb174fae-72d3-45b5-b008-80b4fa482f1e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"cb174fae-72d3-45b5-b008-80b4fa482f1e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 13:30:53 crc kubenswrapper[4725]: I1014 13:30:53.925110 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb174fae-72d3-45b5-b008-80b4fa482f1e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cb174fae-72d3-45b5-b008-80b4fa482f1e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 13:30:53 crc kubenswrapper[4725]: I1014 13:30:53.925137 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb174fae-72d3-45b5-b008-80b4fa482f1e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"cb174fae-72d3-45b5-b008-80b4fa482f1e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 13:30:53 crc kubenswrapper[4725]: I1014 13:30:53.925172 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb174fae-72d3-45b5-b008-80b4fa482f1e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cb174fae-72d3-45b5-b008-80b4fa482f1e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 13:30:53 crc kubenswrapper[4725]: I1014 13:30:53.925189 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb174fae-72d3-45b5-b008-80b4fa482f1e-config\") pod \"ovsdbserver-nb-0\" (UID: \"cb174fae-72d3-45b5-b008-80b4fa482f1e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 13:30:53 crc kubenswrapper[4725]: I1014 13:30:53.925206 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"cb174fae-72d3-45b5-b008-80b4fa482f1e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 13:30:53 crc kubenswrapper[4725]: I1014 
13:30:53.925518 4725 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"cb174fae-72d3-45b5-b008-80b4fa482f1e\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-nb-0" Oct 14 13:30:53 crc kubenswrapper[4725]: I1014 13:30:53.926766 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cb174fae-72d3-45b5-b008-80b4fa482f1e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"cb174fae-72d3-45b5-b008-80b4fa482f1e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 13:30:53 crc kubenswrapper[4725]: I1014 13:30:53.927742 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb174fae-72d3-45b5-b008-80b4fa482f1e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"cb174fae-72d3-45b5-b008-80b4fa482f1e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 13:30:53 crc kubenswrapper[4725]: I1014 13:30:53.931366 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb174fae-72d3-45b5-b008-80b4fa482f1e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cb174fae-72d3-45b5-b008-80b4fa482f1e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 13:30:53 crc kubenswrapper[4725]: I1014 13:30:53.932668 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb174fae-72d3-45b5-b008-80b4fa482f1e-config\") pod \"ovsdbserver-nb-0\" (UID: \"cb174fae-72d3-45b5-b008-80b4fa482f1e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 13:30:53 crc kubenswrapper[4725]: I1014 13:30:53.937170 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb174fae-72d3-45b5-b008-80b4fa482f1e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"cb174fae-72d3-45b5-b008-80b4fa482f1e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 13:30:53 crc kubenswrapper[4725]: I1014 13:30:53.940396 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb174fae-72d3-45b5-b008-80b4fa482f1e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"cb174fae-72d3-45b5-b008-80b4fa482f1e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 13:30:53 crc kubenswrapper[4725]: I1014 13:30:53.946829 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcl64\" (UniqueName: \"kubernetes.io/projected/cb174fae-72d3-45b5-b008-80b4fa482f1e-kube-api-access-vcl64\") pod \"ovsdbserver-nb-0\" (UID: \"cb174fae-72d3-45b5-b008-80b4fa482f1e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 13:30:53 crc kubenswrapper[4725]: I1014 13:30:53.947735 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"cb174fae-72d3-45b5-b008-80b4fa482f1e\") " pod="openstack/ovsdbserver-nb-0" Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.118192 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.442872 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-x2x2g"] Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.444392 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-x2x2g" Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.447074 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-g2b8t" Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.447135 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.447333 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.449497 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-x2x2g"] Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.479042 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-mbwcn"] Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.487827 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-mbwcn" Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.515308 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-mbwcn"] Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.533741 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/02b8ba43-5172-4bac-ac99-104ddf5aea0f-var-log\") pod \"ovn-controller-ovs-mbwcn\" (UID: \"02b8ba43-5172-4bac-ac99-104ddf5aea0f\") " pod="openstack/ovn-controller-ovs-mbwcn" Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.534075 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krzk5\" (UniqueName: \"kubernetes.io/projected/19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c-kube-api-access-krzk5\") pod \"ovn-controller-x2x2g\" (UID: \"19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c\") " pod="openstack/ovn-controller-x2x2g" Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.534166 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/02b8ba43-5172-4bac-ac99-104ddf5aea0f-var-run\") pod \"ovn-controller-ovs-mbwcn\" (UID: \"02b8ba43-5172-4bac-ac99-104ddf5aea0f\") " pod="openstack/ovn-controller-ovs-mbwcn" Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.534201 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c-var-run-ovn\") pod \"ovn-controller-x2x2g\" (UID: \"19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c\") " pod="openstack/ovn-controller-x2x2g" Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.534231 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c-scripts\") pod \"ovn-controller-x2x2g\" (UID: \"19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c\") " pod="openstack/ovn-controller-x2x2g" Oct 14 13:30:54 crc kubenswrapper[4725]: 
I1014 13:30:54.534278 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c-combined-ca-bundle\") pod \"ovn-controller-x2x2g\" (UID: \"19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c\") " pod="openstack/ovn-controller-x2x2g" Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.534305 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/02b8ba43-5172-4bac-ac99-104ddf5aea0f-var-lib\") pod \"ovn-controller-ovs-mbwcn\" (UID: \"02b8ba43-5172-4bac-ac99-104ddf5aea0f\") " pod="openstack/ovn-controller-ovs-mbwcn" Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.534318 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02b8ba43-5172-4bac-ac99-104ddf5aea0f-scripts\") pod \"ovn-controller-ovs-mbwcn\" (UID: \"02b8ba43-5172-4bac-ac99-104ddf5aea0f\") " pod="openstack/ovn-controller-ovs-mbwcn" Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.534348 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/02b8ba43-5172-4bac-ac99-104ddf5aea0f-etc-ovs\") pod \"ovn-controller-ovs-mbwcn\" (UID: \"02b8ba43-5172-4bac-ac99-104ddf5aea0f\") " pod="openstack/ovn-controller-ovs-mbwcn" Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.534369 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c-var-log-ovn\") pod \"ovn-controller-x2x2g\" (UID: \"19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c\") " pod="openstack/ovn-controller-x2x2g" Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.534390 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c-ovn-controller-tls-certs\") pod \"ovn-controller-x2x2g\" (UID: \"19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c\") " pod="openstack/ovn-controller-x2x2g" Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.534405 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9srs\" (UniqueName: \"kubernetes.io/projected/02b8ba43-5172-4bac-ac99-104ddf5aea0f-kube-api-access-r9srs\") pod \"ovn-controller-ovs-mbwcn\" (UID: \"02b8ba43-5172-4bac-ac99-104ddf5aea0f\") " pod="openstack/ovn-controller-ovs-mbwcn" Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.534423 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c-var-run\") pod \"ovn-controller-x2x2g\" (UID: \"19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c\") " pod="openstack/ovn-controller-x2x2g" Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.638447 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c-combined-ca-bundle\") pod \"ovn-controller-x2x2g\" (UID: \"19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c\") " pod="openstack/ovn-controller-x2x2g" Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.638536 4725 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/02b8ba43-5172-4bac-ac99-104ddf5aea0f-var-lib\") pod \"ovn-controller-ovs-mbwcn\" (UID: \"02b8ba43-5172-4bac-ac99-104ddf5aea0f\") " pod="openstack/ovn-controller-ovs-mbwcn" Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.638558 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02b8ba43-5172-4bac-ac99-104ddf5aea0f-scripts\") pod \"ovn-controller-ovs-mbwcn\" (UID: \"02b8ba43-5172-4bac-ac99-104ddf5aea0f\") " pod="openstack/ovn-controller-ovs-mbwcn" Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.638586 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/02b8ba43-5172-4bac-ac99-104ddf5aea0f-etc-ovs\") pod \"ovn-controller-ovs-mbwcn\" (UID: \"02b8ba43-5172-4bac-ac99-104ddf5aea0f\") " pod="openstack/ovn-controller-ovs-mbwcn" Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.638603 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c-var-log-ovn\") pod \"ovn-controller-x2x2g\" (UID: \"19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c\") " pod="openstack/ovn-controller-x2x2g" Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.638620 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c-ovn-controller-tls-certs\") pod \"ovn-controller-x2x2g\" (UID: \"19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c\") " pod="openstack/ovn-controller-x2x2g" Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.638636 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9srs\" (UniqueName: \"kubernetes.io/projected/02b8ba43-5172-4bac-ac99-104ddf5aea0f-kube-api-access-r9srs\") pod \"ovn-controller-ovs-mbwcn\" (UID: \"02b8ba43-5172-4bac-ac99-104ddf5aea0f\") " pod="openstack/ovn-controller-ovs-mbwcn" Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.638657 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c-var-run\") pod \"ovn-controller-x2x2g\" (UID: \"19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c\") " pod="openstack/ovn-controller-x2x2g" Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.638687 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/02b8ba43-5172-4bac-ac99-104ddf5aea0f-var-log\") pod \"ovn-controller-ovs-mbwcn\" (UID: \"02b8ba43-5172-4bac-ac99-104ddf5aea0f\") " pod="openstack/ovn-controller-ovs-mbwcn" Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.638713 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krzk5\" (UniqueName: \"kubernetes.io/projected/19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c-kube-api-access-krzk5\") pod \"ovn-controller-x2x2g\" (UID: \"19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c\") " pod="openstack/ovn-controller-x2x2g" Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.638762 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/02b8ba43-5172-4bac-ac99-104ddf5aea0f-var-run\") pod 
\"ovn-controller-ovs-mbwcn\" (UID: \"02b8ba43-5172-4bac-ac99-104ddf5aea0f\") " pod="openstack/ovn-controller-ovs-mbwcn" Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.638788 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c-var-run-ovn\") pod \"ovn-controller-x2x2g\" (UID: \"19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c\") " pod="openstack/ovn-controller-x2x2g" Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.638818 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c-scripts\") pod \"ovn-controller-x2x2g\" (UID: \"19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c\") " pod="openstack/ovn-controller-x2x2g" Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.639206 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/02b8ba43-5172-4bac-ac99-104ddf5aea0f-etc-ovs\") pod \"ovn-controller-ovs-mbwcn\" (UID: \"02b8ba43-5172-4bac-ac99-104ddf5aea0f\") " pod="openstack/ovn-controller-ovs-mbwcn" Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.639207 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/02b8ba43-5172-4bac-ac99-104ddf5aea0f-var-lib\") pod \"ovn-controller-ovs-mbwcn\" (UID: \"02b8ba43-5172-4bac-ac99-104ddf5aea0f\") " pod="openstack/ovn-controller-ovs-mbwcn" Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.639354 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c-var-log-ovn\") pod \"ovn-controller-x2x2g\" (UID: \"19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c\") " pod="openstack/ovn-controller-x2x2g" Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.639416 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/02b8ba43-5172-4bac-ac99-104ddf5aea0f-var-log\") pod \"ovn-controller-ovs-mbwcn\" (UID: \"02b8ba43-5172-4bac-ac99-104ddf5aea0f\") " pod="openstack/ovn-controller-ovs-mbwcn" Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.639536 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/02b8ba43-5172-4bac-ac99-104ddf5aea0f-var-run\") pod \"ovn-controller-ovs-mbwcn\" (UID: \"02b8ba43-5172-4bac-ac99-104ddf5aea0f\") " pod="openstack/ovn-controller-ovs-mbwcn" Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.639760 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c-var-run\") pod \"ovn-controller-x2x2g\" (UID: \"19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c\") " pod="openstack/ovn-controller-x2x2g" Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.639851 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c-var-run-ovn\") pod \"ovn-controller-x2x2g\" (UID: \"19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c\") " pod="openstack/ovn-controller-x2x2g" Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.641045 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c-scripts\") pod \"ovn-controller-x2x2g\" (UID: \"19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c\") " pod="openstack/ovn-controller-x2x2g" Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.641195 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02b8ba43-5172-4bac-ac99-104ddf5aea0f-scripts\") pod \"ovn-controller-ovs-mbwcn\" (UID: \"02b8ba43-5172-4bac-ac99-104ddf5aea0f\") " pod="openstack/ovn-controller-ovs-mbwcn" Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.645794 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c-ovn-controller-tls-certs\") pod \"ovn-controller-x2x2g\" (UID: \"19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c\") " pod="openstack/ovn-controller-x2x2g" Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.657310 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c-combined-ca-bundle\") pod \"ovn-controller-x2x2g\" (UID: \"19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c\") " pod="openstack/ovn-controller-x2x2g" Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.660139 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9srs\" (UniqueName: \"kubernetes.io/projected/02b8ba43-5172-4bac-ac99-104ddf5aea0f-kube-api-access-r9srs\") pod \"ovn-controller-ovs-mbwcn\" (UID: \"02b8ba43-5172-4bac-ac99-104ddf5aea0f\") " pod="openstack/ovn-controller-ovs-mbwcn" Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.669946 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krzk5\" (UniqueName: \"kubernetes.io/projected/19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c-kube-api-access-krzk5\") pod \"ovn-controller-x2x2g\" (UID: \"19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c\") " pod="openstack/ovn-controller-x2x2g" Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.772023 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-x2x2g" Oct 14 13:30:54 crc kubenswrapper[4725]: I1014 13:30:54.819954 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-mbwcn" Oct 14 13:30:56 crc kubenswrapper[4725]: I1014 13:30:56.929377 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 14 13:30:56 crc kubenswrapper[4725]: I1014 13:30:56.940238 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 14 13:30:56 crc kubenswrapper[4725]: I1014 13:30:56.940360 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Oct 14 13:30:56 crc kubenswrapper[4725]: I1014 13:30:56.943027 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Oct 14 13:30:56 crc kubenswrapper[4725]: I1014 13:30:56.943046 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Oct 14 13:30:56 crc kubenswrapper[4725]: I1014 13:30:56.943331 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-clptd" Oct 14 13:30:56 crc kubenswrapper[4725]: I1014 13:30:56.943881 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Oct 14 13:30:56 crc kubenswrapper[4725]: I1014 13:30:56.987631 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 14 13:30:57 crc kubenswrapper[4725]: I1014 13:30:57.070190 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45a32640-96e1-4f6d-9ace-039ad444fe9e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"45a32640-96e1-4f6d-9ace-039ad444fe9e\") " pod="openstack/ovsdbserver-sb-0" Oct 14 13:30:57 crc kubenswrapper[4725]: I1014 13:30:57.070247 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/45a32640-96e1-4f6d-9ace-039ad444fe9e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"45a32640-96e1-4f6d-9ace-039ad444fe9e\") " pod="openstack/ovsdbserver-sb-0" Oct 14 13:30:57 crc kubenswrapper[4725]: I1014 13:30:57.070279 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/45a32640-96e1-4f6d-9ace-039ad444fe9e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"45a32640-96e1-4f6d-9ace-039ad444fe9e\") " pod="openstack/ovsdbserver-sb-0" Oct 14 13:30:57 crc kubenswrapper[4725]: I1014 13:30:57.070309 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"45a32640-96e1-4f6d-9ace-039ad444fe9e\") " pod="openstack/ovsdbserver-sb-0" Oct 14 13:30:57 crc kubenswrapper[4725]: I1014 13:30:57.070338 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/45a32640-96e1-4f6d-9ace-039ad444fe9e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"45a32640-96e1-4f6d-9ace-039ad444fe9e\") " pod="openstack/ovsdbserver-sb-0" Oct 14 13:30:57 crc kubenswrapper[4725]: I1014 13:30:57.070386 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45a32640-96e1-4f6d-9ace-039ad444fe9e-config\") pod \"ovsdbserver-sb-0\" (UID: \"45a32640-96e1-4f6d-9ace-039ad444fe9e\") " pod="openstack/ovsdbserver-sb-0" Oct 14 13:30:57 crc kubenswrapper[4725]: I1014 13:30:57.070421 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/45a32640-96e1-4f6d-9ace-039ad444fe9e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"45a32640-96e1-4f6d-9ace-039ad444fe9e\") " 
pod="openstack/ovsdbserver-sb-0" Oct 14 13:30:57 crc kubenswrapper[4725]: I1014 13:30:57.070464 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm6cb\" (UniqueName: \"kubernetes.io/projected/45a32640-96e1-4f6d-9ace-039ad444fe9e-kube-api-access-zm6cb\") pod \"ovsdbserver-sb-0\" (UID: \"45a32640-96e1-4f6d-9ace-039ad444fe9e\") " pod="openstack/ovsdbserver-sb-0" Oct 14 13:30:57 crc kubenswrapper[4725]: I1014 13:30:57.171640 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45a32640-96e1-4f6d-9ace-039ad444fe9e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"45a32640-96e1-4f6d-9ace-039ad444fe9e\") " pod="openstack/ovsdbserver-sb-0" Oct 14 13:30:57 crc kubenswrapper[4725]: I1014 13:30:57.171694 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/45a32640-96e1-4f6d-9ace-039ad444fe9e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"45a32640-96e1-4f6d-9ace-039ad444fe9e\") " pod="openstack/ovsdbserver-sb-0" Oct 14 13:30:57 crc kubenswrapper[4725]: I1014 13:30:57.171726 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/45a32640-96e1-4f6d-9ace-039ad444fe9e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"45a32640-96e1-4f6d-9ace-039ad444fe9e\") " pod="openstack/ovsdbserver-sb-0" Oct 14 13:30:57 crc kubenswrapper[4725]: I1014 13:30:57.171747 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"45a32640-96e1-4f6d-9ace-039ad444fe9e\") " pod="openstack/ovsdbserver-sb-0" Oct 14 13:30:57 crc kubenswrapper[4725]: I1014 13:30:57.171781 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/45a32640-96e1-4f6d-9ace-039ad444fe9e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"45a32640-96e1-4f6d-9ace-039ad444fe9e\") " pod="openstack/ovsdbserver-sb-0" Oct 14 13:30:57 crc kubenswrapper[4725]: I1014 13:30:57.171845 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45a32640-96e1-4f6d-9ace-039ad444fe9e-config\") pod \"ovsdbserver-sb-0\" (UID: \"45a32640-96e1-4f6d-9ace-039ad444fe9e\") " pod="openstack/ovsdbserver-sb-0" Oct 14 13:30:57 crc kubenswrapper[4725]: I1014 13:30:57.171895 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/45a32640-96e1-4f6d-9ace-039ad444fe9e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"45a32640-96e1-4f6d-9ace-039ad444fe9e\") " pod="openstack/ovsdbserver-sb-0" Oct 14 13:30:57 crc kubenswrapper[4725]: I1014 13:30:57.171928 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm6cb\" (UniqueName: \"kubernetes.io/projected/45a32640-96e1-4f6d-9ace-039ad444fe9e-kube-api-access-zm6cb\") pod \"ovsdbserver-sb-0\" (UID: \"45a32640-96e1-4f6d-9ace-039ad444fe9e\") " pod="openstack/ovsdbserver-sb-0" Oct 14 13:30:57 crc kubenswrapper[4725]: I1014 13:30:57.172641 4725 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"45a32640-96e1-4f6d-9ace-039ad444fe9e\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-sb-0" Oct 14 13:30:57 crc kubenswrapper[4725]: I1014 13:30:57.172712 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/45a32640-96e1-4f6d-9ace-039ad444fe9e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"45a32640-96e1-4f6d-9ace-039ad444fe9e\") " pod="openstack/ovsdbserver-sb-0" Oct 14 13:30:57 crc kubenswrapper[4725]: I1014 13:30:57.173223 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45a32640-96e1-4f6d-9ace-039ad444fe9e-config\") pod \"ovsdbserver-sb-0\" (UID: \"45a32640-96e1-4f6d-9ace-039ad444fe9e\") " pod="openstack/ovsdbserver-sb-0" Oct 14 13:30:57 crc kubenswrapper[4725]: I1014 13:30:57.173945 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/45a32640-96e1-4f6d-9ace-039ad444fe9e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"45a32640-96e1-4f6d-9ace-039ad444fe9e\") " pod="openstack/ovsdbserver-sb-0" Oct 14 13:30:57 crc kubenswrapper[4725]: I1014 13:30:57.180507 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/45a32640-96e1-4f6d-9ace-039ad444fe9e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"45a32640-96e1-4f6d-9ace-039ad444fe9e\") " pod="openstack/ovsdbserver-sb-0" Oct 14 13:30:57 crc kubenswrapper[4725]: I1014 13:30:57.180744 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45a32640-96e1-4f6d-9ace-039ad444fe9e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"45a32640-96e1-4f6d-9ace-039ad444fe9e\") " pod="openstack/ovsdbserver-sb-0" Oct 14 13:30:57 crc kubenswrapper[4725]: I1014 13:30:57.180941 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/45a32640-96e1-4f6d-9ace-039ad444fe9e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"45a32640-96e1-4f6d-9ace-039ad444fe9e\") " pod="openstack/ovsdbserver-sb-0" Oct 14 13:30:57 crc kubenswrapper[4725]: I1014 13:30:57.188940 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm6cb\" (UniqueName: \"kubernetes.io/projected/45a32640-96e1-4f6d-9ace-039ad444fe9e-kube-api-access-zm6cb\") pod \"ovsdbserver-sb-0\" (UID: \"45a32640-96e1-4f6d-9ace-039ad444fe9e\") " pod="openstack/ovsdbserver-sb-0" Oct 14 13:30:57 crc kubenswrapper[4725]: I1014 13:30:57.193983 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"45a32640-96e1-4f6d-9ace-039ad444fe9e\") " pod="openstack/ovsdbserver-sb-0" Oct 14 13:30:57 crc kubenswrapper[4725]: I1014 13:30:57.264704 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Oct 14 13:31:02 crc kubenswrapper[4725]: I1014 13:31:02.520796 4725 patch_prober.go:28] interesting pod/machine-config-daemon-t9hh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 13:31:02 crc kubenswrapper[4725]: I1014 13:31:02.521222 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 13:31:02 crc kubenswrapper[4725]: I1014 13:31:02.521291 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" Oct 14 13:31:02 crc kubenswrapper[4725]: I1014 13:31:02.522640 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aafea91841c00d0b809adb7fadf158b888ea4f36dadb61e7933d5af2c309820b"} pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 13:31:02 crc kubenswrapper[4725]: I1014 13:31:02.522753 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" containerID="cri-o://aafea91841c00d0b809adb7fadf158b888ea4f36dadb61e7933d5af2c309820b" gracePeriod=600 Oct 14 13:31:04 crc kubenswrapper[4725]: W1014 13:31:04.626385 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb08afeea_2257_428e_be50_e2cf1b5cc67e.slice/crio-aa6e15a2b01291e75edaffee8e7d12423af2379662fe30427fe0c2ca0de5bd85 WatchSource:0}: Error finding container aa6e15a2b01291e75edaffee8e7d12423af2379662fe30427fe0c2ca0de5bd85: Status 404 returned error can't find the container with id aa6e15a2b01291e75edaffee8e7d12423af2379662fe30427fe0c2ca0de5bd85 Oct 14 13:31:04 crc kubenswrapper[4725]: E1014 13:31:04.636383 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Oct 14 13:31:04 crc kubenswrapper[4725]: E1014 13:31:04.636595 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l7885,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(e690ed1d-b1fe-48b5-817c-d512cef45181): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 13:31:04 crc kubenswrapper[4725]: E1014 13:31:04.637783 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="e690ed1d-b1fe-48b5-817c-d512cef45181" Oct 14 13:31:04 crc kubenswrapper[4725]: I1014 13:31:04.922707 4725 generic.go:334] "Generic (PLEG): container finished" podID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerID="aafea91841c00d0b809adb7fadf158b888ea4f36dadb61e7933d5af2c309820b" exitCode=0 Oct 14 13:31:04 crc kubenswrapper[4725]: I1014 13:31:04.922883 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" event={"ID":"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c","Type":"ContainerDied","Data":"aafea91841c00d0b809adb7fadf158b888ea4f36dadb61e7933d5af2c309820b"} Oct 14 13:31:04 crc kubenswrapper[4725]: I1014 13:31:04.923100 4725 scope.go:117] "RemoveContainer" containerID="8af3cd49f53b953cdb857f98f2a4e4ef4b83977a9a2d06c5f02fcc6cd95add47" Oct 14 13:31:04 crc kubenswrapper[4725]: I1014 13:31:04.926258 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b08afeea-2257-428e-be50-e2cf1b5cc67e","Type":"ContainerStarted","Data":"aa6e15a2b01291e75edaffee8e7d12423af2379662fe30427fe0c2ca0de5bd85"} Oct 14 13:31:05 crc kubenswrapper[4725]: I1014 
13:31:05.133911 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Oct 14 13:31:05 crc kubenswrapper[4725]: I1014 13:31:05.194284 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 14 13:31:05 crc kubenswrapper[4725]: W1014 13:31:05.450918 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod166bc2c3_283f_4c1d_815b_54fffa8192d5.slice/crio-e4c1324ba598a172931a6347a54d2744f8bb7286a0eed42249d75f3bb0f7f412 WatchSource:0}: Error finding container e4c1324ba598a172931a6347a54d2744f8bb7286a0eed42249d75f3bb0f7f412: Status 404 returned error can't find the container with id e4c1324ba598a172931a6347a54d2744f8bb7286a0eed42249d75f3bb0f7f412 Oct 14 13:31:05 crc kubenswrapper[4725]: W1014 13:31:05.454621 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod697b603d_cd65_466b_930a_86c43c8483ba.slice/crio-978c3914e372690950645cddd308f0cdd6d93eeedfc0cd7bc520e0b6584176d8 WatchSource:0}: Error finding container 978c3914e372690950645cddd308f0cdd6d93eeedfc0cd7bc520e0b6584176d8: Status 404 returned error can't find the container with id 978c3914e372690950645cddd308f0cdd6d93eeedfc0cd7bc520e0b6584176d8 Oct 14 13:31:05 crc kubenswrapper[4725]: E1014 13:31:05.464317 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Oct 14 13:31:05 crc kubenswrapper[4725]: E1014 13:31:05.465008 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fbvkf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-cd96f_openstack(797dd808-4202-4b79-a470-8023a6b859b0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 13:31:05 crc kubenswrapper[4725]: E1014 13:31:05.466762 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-cd96f" podUID="797dd808-4202-4b79-a470-8023a6b859b0" Oct 14 13:31:05 crc kubenswrapper[4725]: E1014 13:31:05.484649 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Oct 14 13:31:05 crc kubenswrapper[4725]: E1014 13:31:05.484829 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jmfgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-jshb9_openstack(2c3f761a-b546-4e23-9bdd-ada6d0595b10): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 13:31:05 crc kubenswrapper[4725]: E1014 13:31:05.485991 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-jshb9" podUID="2c3f761a-b546-4e23-9bdd-ada6d0595b10" Oct 14 13:31:05 crc kubenswrapper[4725]: E1014 13:31:05.491735 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Oct 14 13:31:05 crc kubenswrapper[4725]: E1014 13:31:05.491909 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jnrrm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-6tlzb_openstack(85c2073b-91ed-4a5a-a28c-f5c073f5e70e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 13:31:05 crc kubenswrapper[4725]: E1014 13:31:05.494286 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-6tlzb" podUID="85c2073b-91ed-4a5a-a28c-f5c073f5e70e" Oct 14 13:31:05 crc kubenswrapper[4725]: E1014 13:31:05.563491 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Oct 14 13:31:05 crc kubenswrapper[4725]: E1014 13:31:05.563666 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-948p8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-2srqh_openstack(dd9edf73-e00e-4b3d-84ea-93201123d400): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 13:31:05 crc kubenswrapper[4725]: E1014 13:31:05.564877 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-2srqh" podUID="dd9edf73-e00e-4b3d-84ea-93201123d400" Oct 14 13:31:05 crc kubenswrapper[4725]: E1014 13:31:05.977161 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-2srqh" podUID="dd9edf73-e00e-4b3d-84ea-93201123d400" Oct 14 13:31:05 crc kubenswrapper[4725]: E1014 13:31:05.977574 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-6tlzb" podUID="85c2073b-91ed-4a5a-a28c-f5c073f5e70e" Oct 14 13:31:05 crc kubenswrapper[4725]: I1014 13:31:05.978479 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"166bc2c3-283f-4c1d-815b-54fffa8192d5","Type":"ContainerStarted","Data":"e4c1324ba598a172931a6347a54d2744f8bb7286a0eed42249d75f3bb0f7f412"} Oct 14 13:31:05 crc kubenswrapper[4725]: I1014 13:31:05.978534 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" 
event={"ID":"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c","Type":"ContainerStarted","Data":"ea00d1d4fd499f409fe49f6dd2f54e9fa910b8219b121324d8eb5e1f54150712"} Oct 14 13:31:05 crc kubenswrapper[4725]: I1014 13:31:05.978552 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"697b603d-cd65-466b-930a-86c43c8483ba","Type":"ContainerStarted","Data":"978c3914e372690950645cddd308f0cdd6d93eeedfc0cd7bc520e0b6584176d8"} Oct 14 13:31:06 crc kubenswrapper[4725]: I1014 13:31:06.020967 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Oct 14 13:31:06 crc kubenswrapper[4725]: I1014 13:31:06.081936 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 14 13:31:06 crc kubenswrapper[4725]: I1014 13:31:06.099665 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-x2x2g"] Oct 14 13:31:06 crc kubenswrapper[4725]: W1014 13:31:06.140798 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb72e8db7_f91f_41b1_95bd_366cd156f5ed.slice/crio-782275102f5002fd4ea8e33e49e786da0b4d2e2dc9b76fc4f23066420923b306 WatchSource:0}: Error finding container 782275102f5002fd4ea8e33e49e786da0b4d2e2dc9b76fc4f23066420923b306: Status 404 returned error can't find the container with id 782275102f5002fd4ea8e33e49e786da0b4d2e2dc9b76fc4f23066420923b306 Oct 14 13:31:06 crc kubenswrapper[4725]: I1014 13:31:06.165402 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 14 13:31:06 crc kubenswrapper[4725]: I1014 13:31:06.265755 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-mbwcn"] Oct 14 13:31:06 crc kubenswrapper[4725]: I1014 13:31:06.693335 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-jshb9" Oct 14 13:31:06 crc kubenswrapper[4725]: I1014 13:31:06.699334 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-cd96f" Oct 14 13:31:06 crc kubenswrapper[4725]: I1014 13:31:06.807244 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbvkf\" (UniqueName: \"kubernetes.io/projected/797dd808-4202-4b79-a470-8023a6b859b0-kube-api-access-fbvkf\") pod \"797dd808-4202-4b79-a470-8023a6b859b0\" (UID: \"797dd808-4202-4b79-a470-8023a6b859b0\") " Oct 14 13:31:06 crc kubenswrapper[4725]: I1014 13:31:06.807387 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/797dd808-4202-4b79-a470-8023a6b859b0-config\") pod \"797dd808-4202-4b79-a470-8023a6b859b0\" (UID: \"797dd808-4202-4b79-a470-8023a6b859b0\") " Oct 14 13:31:06 crc kubenswrapper[4725]: I1014 13:31:06.807422 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c3f761a-b546-4e23-9bdd-ada6d0595b10-config\") pod \"2c3f761a-b546-4e23-9bdd-ada6d0595b10\" (UID: \"2c3f761a-b546-4e23-9bdd-ada6d0595b10\") " Oct 14 13:31:06 crc kubenswrapper[4725]: I1014 13:31:06.807468 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/797dd808-4202-4b79-a470-8023a6b859b0-dns-svc\") pod \"797dd808-4202-4b79-a470-8023a6b859b0\" (UID: \"797dd808-4202-4b79-a470-8023a6b859b0\") " Oct 14 13:31:06 crc kubenswrapper[4725]: I1014 13:31:06.807530 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmfgt\" (UniqueName: \"kubernetes.io/projected/2c3f761a-b546-4e23-9bdd-ada6d0595b10-kube-api-access-jmfgt\") pod \"2c3f761a-b546-4e23-9bdd-ada6d0595b10\" (UID: \"2c3f761a-b546-4e23-9bdd-ada6d0595b10\") " Oct 14 13:31:06 crc kubenswrapper[4725]: I1014 13:31:06.808371 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/797dd808-4202-4b79-a470-8023a6b859b0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "797dd808-4202-4b79-a470-8023a6b859b0" (UID: "797dd808-4202-4b79-a470-8023a6b859b0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:31:06 crc kubenswrapper[4725]: I1014 13:31:06.808514 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/797dd808-4202-4b79-a470-8023a6b859b0-config" (OuterVolumeSpecName: "config") pod "797dd808-4202-4b79-a470-8023a6b859b0" (UID: "797dd808-4202-4b79-a470-8023a6b859b0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:31:06 crc kubenswrapper[4725]: I1014 13:31:06.808862 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c3f761a-b546-4e23-9bdd-ada6d0595b10-config" (OuterVolumeSpecName: "config") pod "2c3f761a-b546-4e23-9bdd-ada6d0595b10" (UID: "2c3f761a-b546-4e23-9bdd-ada6d0595b10"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:31:06 crc kubenswrapper[4725]: I1014 13:31:06.813749 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c3f761a-b546-4e23-9bdd-ada6d0595b10-kube-api-access-jmfgt" (OuterVolumeSpecName: "kube-api-access-jmfgt") pod "2c3f761a-b546-4e23-9bdd-ada6d0595b10" (UID: "2c3f761a-b546-4e23-9bdd-ada6d0595b10"). InnerVolumeSpecName "kube-api-access-jmfgt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:31:06 crc kubenswrapper[4725]: I1014 13:31:06.813858 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/797dd808-4202-4b79-a470-8023a6b859b0-kube-api-access-fbvkf" (OuterVolumeSpecName: "kube-api-access-fbvkf") pod "797dd808-4202-4b79-a470-8023a6b859b0" (UID: "797dd808-4202-4b79-a470-8023a6b859b0"). InnerVolumeSpecName "kube-api-access-fbvkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:31:06 crc kubenswrapper[4725]: I1014 13:31:06.908901 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmfgt\" (UniqueName: \"kubernetes.io/projected/2c3f761a-b546-4e23-9bdd-ada6d0595b10-kube-api-access-jmfgt\") on node \"crc\" DevicePath \"\"" Oct 14 13:31:06 crc kubenswrapper[4725]: I1014 13:31:06.908969 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbvkf\" (UniqueName: \"kubernetes.io/projected/797dd808-4202-4b79-a470-8023a6b859b0-kube-api-access-fbvkf\") on node \"crc\" DevicePath \"\"" Oct 14 13:31:06 crc kubenswrapper[4725]: I1014 13:31:06.908992 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/797dd808-4202-4b79-a470-8023a6b859b0-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:31:06 crc kubenswrapper[4725]: I1014 13:31:06.909002 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c3f761a-b546-4e23-9bdd-ada6d0595b10-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:31:06 crc kubenswrapper[4725]: I1014 13:31:06.909024 4725 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/797dd808-4202-4b79-a470-8023a6b859b0-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 13:31:06 crc kubenswrapper[4725]: I1014 13:31:06.987259 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-jshb9" Oct 14 13:31:06 crc kubenswrapper[4725]: I1014 13:31:06.987249 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-jshb9" event={"ID":"2c3f761a-b546-4e23-9bdd-ada6d0595b10","Type":"ContainerDied","Data":"a27ab98f3e5f3d00a1990615afbfd8c3422ad557b6fc67ea7f2b83724425f603"} Oct 14 13:31:06 crc kubenswrapper[4725]: I1014 13:31:06.990346 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c","Type":"ContainerStarted","Data":"9af8ac336b7385e92f1f7700ffd3de85aee088e2f9349df4545a5e644720a771"} Oct 14 13:31:06 crc kubenswrapper[4725]: I1014 13:31:06.991906 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"cb174fae-72d3-45b5-b008-80b4fa482f1e","Type":"ContainerStarted","Data":"b14d8aa56d64a11490a1d0474a2884fb856a56ffa682c5d26f550240a3557f3e"} Oct 14 13:31:06 crc kubenswrapper[4725]: I1014 13:31:06.993908 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"45a32640-96e1-4f6d-9ace-039ad444fe9e","Type":"ContainerStarted","Data":"7d8c35ac73a5b8336c05c6d02116d30e453926289e15ddc76591f3b467be59f5"} Oct 14 13:31:06 crc kubenswrapper[4725]: I1014 13:31:06.995483 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-cd96f" event={"ID":"797dd808-4202-4b79-a470-8023a6b859b0","Type":"ContainerDied","Data":"ebcf70262ee0093aea1356acb065037fac286de9dcced4dceb690fe9a497f40b"} Oct 14 13:31:06 crc kubenswrapper[4725]: I1014 13:31:06.995489 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-cd96f" Oct 14 13:31:06 crc kubenswrapper[4725]: I1014 13:31:06.996971 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mbwcn" event={"ID":"02b8ba43-5172-4bac-ac99-104ddf5aea0f","Type":"ContainerStarted","Data":"3227c7a8cc66f9bdcb38c178db378a6bff4a4aa16538e7eee4f62ef2524770a6"} Oct 14 13:31:06 crc kubenswrapper[4725]: I1014 13:31:06.998924 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e690ed1d-b1fe-48b5-817c-d512cef45181","Type":"ContainerStarted","Data":"469d6c9dd8280483d8bdc01c4219e2d1f77837452b0087234379692a73aa3bd2"} Oct 14 13:31:07 crc kubenswrapper[4725]: I1014 13:31:07.000049 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x2x2g" event={"ID":"19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c","Type":"ContainerStarted","Data":"700caf2866de16ae0a2578dda2829d1cc5b34d6bd3327ff751293419116def1b"} Oct 14 13:31:07 crc kubenswrapper[4725]: I1014 13:31:07.001173 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b72e8db7-f91f-41b1-95bd-366cd156f5ed","Type":"ContainerStarted","Data":"782275102f5002fd4ea8e33e49e786da0b4d2e2dc9b76fc4f23066420923b306"} Oct 14 13:31:07 crc kubenswrapper[4725]: I1014 13:31:07.105776 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-cd96f"] Oct 14 13:31:07 crc kubenswrapper[4725]: I1014 13:31:07.109973 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-cd96f"] Oct 14 13:31:07 crc kubenswrapper[4725]: I1014 13:31:07.117378 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-jshb9"] Oct 14 13:31:07 crc 
kubenswrapper[4725]: I1014 13:31:07.122042 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-jshb9"] Oct 14 13:31:07 crc kubenswrapper[4725]: I1014 13:31:07.932443 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c3f761a-b546-4e23-9bdd-ada6d0595b10" path="/var/lib/kubelet/pods/2c3f761a-b546-4e23-9bdd-ada6d0595b10/volumes" Oct 14 13:31:07 crc kubenswrapper[4725]: I1014 13:31:07.933741 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="797dd808-4202-4b79-a470-8023a6b859b0" path="/var/lib/kubelet/pods/797dd808-4202-4b79-a470-8023a6b859b0/volumes" Oct 14 13:31:14 crc kubenswrapper[4725]: I1014 13:31:14.057575 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"45a32640-96e1-4f6d-9ace-039ad444fe9e","Type":"ContainerStarted","Data":"c855abf67f4437e4b71aa90fae873d1a18b29224d288250e0d5bde0256abb76f"} Oct 14 13:31:14 crc kubenswrapper[4725]: I1014 13:31:14.059540 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x2x2g" event={"ID":"19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c","Type":"ContainerStarted","Data":"a08511c364199baf0cd06f0f8a6226268321f18cb3bce4c9c681cd5f993c2358"} Oct 14 13:31:14 crc kubenswrapper[4725]: I1014 13:31:14.059626 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-x2x2g" Oct 14 13:31:14 crc kubenswrapper[4725]: I1014 13:31:14.061591 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b08afeea-2257-428e-be50-e2cf1b5cc67e","Type":"ContainerStarted","Data":"2486c7747b7a5763b53b0bb9a1d4e8ac0456911909b2c21272424b1eab3618b4"} Oct 14 13:31:14 crc kubenswrapper[4725]: I1014 13:31:14.061794 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Oct 14 13:31:14 crc kubenswrapper[4725]: I1014 13:31:14.064660 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b72e8db7-f91f-41b1-95bd-366cd156f5ed","Type":"ContainerStarted","Data":"978db13195ed9eba4c0208756f41efedbd271e8a1d37e97e09179e874fae63b7"} Oct 14 13:31:14 crc kubenswrapper[4725]: I1014 13:31:14.066149 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"cb174fae-72d3-45b5-b008-80b4fa482f1e","Type":"ContainerStarted","Data":"1cf8edf808c31cf91fca3173a80d2932ebb58c63aa9910868dce808cb2c75bfa"} Oct 14 13:31:14 crc kubenswrapper[4725]: I1014 13:31:14.068341 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"697b603d-cd65-466b-930a-86c43c8483ba","Type":"ContainerStarted","Data":"b9268e5caf9012662e977c9b5dc05097dc1302660652941daafab03c92b762a7"} Oct 14 13:31:14 crc kubenswrapper[4725]: I1014 13:31:14.069702 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"166bc2c3-283f-4c1d-815b-54fffa8192d5","Type":"ContainerStarted","Data":"e06035b127146b105ebcebc8e2e634372a2c9d6e5be433306996093f66dbb9de"} Oct 14 13:31:14 crc kubenswrapper[4725]: I1014 13:31:14.069805 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Oct 14 13:31:14 crc kubenswrapper[4725]: I1014 13:31:14.071040 4725 generic.go:334] "Generic (PLEG): container finished" podID="02b8ba43-5172-4bac-ac99-104ddf5aea0f" containerID="57f6293c52321fa049fd361ac56457d842d1b9c230c40ac3eccecb856023d67a" exitCode=0 Oct 14 13:31:14 crc 
kubenswrapper[4725]: I1014 13:31:14.071080 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mbwcn" event={"ID":"02b8ba43-5172-4bac-ac99-104ddf5aea0f","Type":"ContainerDied","Data":"57f6293c52321fa049fd361ac56457d842d1b9c230c40ac3eccecb856023d67a"} Oct 14 13:31:14 crc kubenswrapper[4725]: I1014 13:31:14.084840 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-x2x2g" podStartSLOduration=13.152572456 podStartE2EDuration="20.084820952s" podCreationTimestamp="2025-10-14 13:30:54 +0000 UTC" firstStartedPulling="2025-10-14 13:31:06.167573765 +0000 UTC m=+983.016008584" lastFinishedPulling="2025-10-14 13:31:13.099822241 +0000 UTC m=+989.948257080" observedRunningTime="2025-10-14 13:31:14.080944686 +0000 UTC m=+990.929379505" watchObservedRunningTime="2025-10-14 13:31:14.084820952 +0000 UTC m=+990.933255761" Oct 14 13:31:14 crc kubenswrapper[4725]: I1014 13:31:14.147146 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=20.307623013 podStartE2EDuration="27.147128086s" podCreationTimestamp="2025-10-14 13:30:47 +0000 UTC" firstStartedPulling="2025-10-14 13:31:05.454936263 +0000 UTC m=+982.303371072" lastFinishedPulling="2025-10-14 13:31:12.294441335 +0000 UTC m=+989.142876145" observedRunningTime="2025-10-14 13:31:14.141904404 +0000 UTC m=+990.990339233" watchObservedRunningTime="2025-10-14 13:31:14.147128086 +0000 UTC m=+990.995562905" Oct 14 13:31:14 crc kubenswrapper[4725]: I1014 13:31:14.218759 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=16.810813231 podStartE2EDuration="25.218739693s" podCreationTimestamp="2025-10-14 13:30:49 +0000 UTC" firstStartedPulling="2025-10-14 13:31:04.646671789 +0000 UTC m=+981.495106608" lastFinishedPulling="2025-10-14 13:31:13.054598261 +0000 UTC m=+989.903033070" observedRunningTime="2025-10-14 13:31:14.210741506 +0000 UTC m=+991.059176315" watchObservedRunningTime="2025-10-14 13:31:14.218739693 +0000 UTC m=+991.067174512" Oct 14 13:31:15 crc kubenswrapper[4725]: I1014 13:31:15.085882 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mbwcn" event={"ID":"02b8ba43-5172-4bac-ac99-104ddf5aea0f","Type":"ContainerStarted","Data":"2dc86f20868a29875be72a3765f1e0be6b39c840b53afdc6ad7deb1a4c6b61a2"} Oct 14 13:31:15 crc kubenswrapper[4725]: I1014 13:31:15.086525 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mbwcn" event={"ID":"02b8ba43-5172-4bac-ac99-104ddf5aea0f","Type":"ContainerStarted","Data":"c2df31c3ea4b8c6f6c0cf283467ed7d33138bfd486aea5c0d00b749ddedc235a"} Oct 14 13:31:15 crc kubenswrapper[4725]: I1014 13:31:15.086638 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-mbwcn" Oct 14 13:31:15 crc kubenswrapper[4725]: I1014 13:31:15.086667 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-mbwcn" Oct 14 13:31:17 crc kubenswrapper[4725]: I1014 13:31:17.101608 4725 generic.go:334] "Generic (PLEG): container finished" podID="697b603d-cd65-466b-930a-86c43c8483ba" containerID="b9268e5caf9012662e977c9b5dc05097dc1302660652941daafab03c92b762a7" exitCode=0 Oct 14 13:31:17 crc kubenswrapper[4725]: I1014 13:31:17.101671 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"697b603d-cd65-466b-930a-86c43c8483ba","Type":"ContainerDied","Data":"b9268e5caf9012662e977c9b5dc05097dc1302660652941daafab03c92b762a7"} Oct 14 13:31:17 crc kubenswrapper[4725]: I1014 13:31:17.106155 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"45a32640-96e1-4f6d-9ace-039ad444fe9e","Type":"ContainerStarted","Data":"1bc4a724226f7d81173da085aa506fcb935697d609079aae557d0c4ed4bde0c4"} Oct 14 13:31:17 crc kubenswrapper[4725]: I1014 13:31:17.108497 4725 generic.go:334] "Generic (PLEG): container finished" podID="b72e8db7-f91f-41b1-95bd-366cd156f5ed" containerID="978db13195ed9eba4c0208756f41efedbd271e8a1d37e97e09179e874fae63b7" exitCode=0 Oct 14 13:31:17 crc kubenswrapper[4725]: I1014 13:31:17.108616 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b72e8db7-f91f-41b1-95bd-366cd156f5ed","Type":"ContainerDied","Data":"978db13195ed9eba4c0208756f41efedbd271e8a1d37e97e09179e874fae63b7"} Oct 14 13:31:17 crc kubenswrapper[4725]: I1014 13:31:17.111276 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"cb174fae-72d3-45b5-b008-80b4fa482f1e","Type":"ContainerStarted","Data":"4c13970ac613a92f105791c526628d9b46e86c24e865755d8e79fd8da61271f9"} Oct 14 13:31:17 crc kubenswrapper[4725]: I1014 13:31:17.131474 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-mbwcn" podStartSLOduration=16.677110366 podStartE2EDuration="23.131434864s" podCreationTimestamp="2025-10-14 13:30:54 +0000 UTC" firstStartedPulling="2025-10-14 13:31:06.612844095 +0000 UTC m=+983.461278924" lastFinishedPulling="2025-10-14 13:31:13.067168623 +0000 UTC m=+989.915603422" observedRunningTime="2025-10-14 13:31:15.113438209 +0000 UTC m=+991.961873018" watchObservedRunningTime="2025-10-14 13:31:17.131434864 +0000 UTC m=+993.979869693" Oct 14 13:31:17 crc kubenswrapper[4725]: I1014 13:31:17.195004 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=12.358773624 podStartE2EDuration="22.194985893s" podCreationTimestamp="2025-10-14 13:30:55 +0000 UTC" firstStartedPulling="2025-10-14 13:31:06.611684434 +0000 UTC m=+983.460119243" lastFinishedPulling="2025-10-14 13:31:16.447896703 +0000 UTC m=+993.296331512" observedRunningTime="2025-10-14 13:31:17.193096801 +0000 UTC m=+994.041531620" watchObservedRunningTime="2025-10-14 13:31:17.194985893 +0000 UTC m=+994.043420702" Oct 14 13:31:17 crc kubenswrapper[4725]: I1014 13:31:17.221831 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=14.956604444 podStartE2EDuration="25.221810952s" podCreationTimestamp="2025-10-14 13:30:52 +0000 UTC" firstStartedPulling="2025-10-14 13:31:06.167137233 +0000 UTC m=+983.015572042" lastFinishedPulling="2025-10-14 13:31:16.432343741 +0000 UTC m=+993.280778550" observedRunningTime="2025-10-14 13:31:17.215242164 +0000 UTC m=+994.063676973" watchObservedRunningTime="2025-10-14 13:31:17.221810952 +0000 UTC m=+994.070245781" Oct 14 13:31:17 crc kubenswrapper[4725]: I1014 13:31:17.265239 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Oct 14 13:31:17 crc kubenswrapper[4725]: I1014 13:31:17.615182 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-lx5gg"] Oct 14 13:31:17 crc kubenswrapper[4725]: I1014 13:31:17.616787 4725 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-lx5gg" Oct 14 13:31:17 crc kubenswrapper[4725]: I1014 13:31:17.620686 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Oct 14 13:31:17 crc kubenswrapper[4725]: I1014 13:31:17.622634 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-lx5gg"] Oct 14 13:31:17 crc kubenswrapper[4725]: I1014 13:31:17.803859 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d31243a3-876e-4912-a468-195483df0425-ovs-rundir\") pod \"ovn-controller-metrics-lx5gg\" (UID: \"d31243a3-876e-4912-a468-195483df0425\") " pod="openstack/ovn-controller-metrics-lx5gg" Oct 14 13:31:17 crc kubenswrapper[4725]: I1014 13:31:17.803916 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgtzw\" (UniqueName: \"kubernetes.io/projected/d31243a3-876e-4912-a468-195483df0425-kube-api-access-zgtzw\") pod \"ovn-controller-metrics-lx5gg\" (UID: \"d31243a3-876e-4912-a468-195483df0425\") " pod="openstack/ovn-controller-metrics-lx5gg" Oct 14 13:31:17 crc kubenswrapper[4725]: I1014 13:31:17.803951 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d31243a3-876e-4912-a468-195483df0425-ovn-rundir\") pod \"ovn-controller-metrics-lx5gg\" (UID: \"d31243a3-876e-4912-a468-195483df0425\") " pod="openstack/ovn-controller-metrics-lx5gg" Oct 14 13:31:17 crc kubenswrapper[4725]: I1014 13:31:17.803977 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d31243a3-876e-4912-a468-195483df0425-combined-ca-bundle\") pod \"ovn-controller-metrics-lx5gg\" (UID: \"d31243a3-876e-4912-a468-195483df0425\") " pod="openstack/ovn-controller-metrics-lx5gg" Oct 14 13:31:17 crc kubenswrapper[4725]: I1014 13:31:17.804021 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d31243a3-876e-4912-a468-195483df0425-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-lx5gg\" (UID: \"d31243a3-876e-4912-a468-195483df0425\") " pod="openstack/ovn-controller-metrics-lx5gg" Oct 14 13:31:17 crc kubenswrapper[4725]: I1014 13:31:17.804061 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d31243a3-876e-4912-a468-195483df0425-config\") pod \"ovn-controller-metrics-lx5gg\" (UID: \"d31243a3-876e-4912-a468-195483df0425\") " pod="openstack/ovn-controller-metrics-lx5gg" Oct 14 13:31:17 crc kubenswrapper[4725]: I1014 13:31:17.844919 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-6tlzb"] Oct 14 13:31:17 crc kubenswrapper[4725]: I1014 13:31:17.905550 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgtzw\" (UniqueName: \"kubernetes.io/projected/d31243a3-876e-4912-a468-195483df0425-kube-api-access-zgtzw\") pod \"ovn-controller-metrics-lx5gg\" (UID: \"d31243a3-876e-4912-a468-195483df0425\") " pod="openstack/ovn-controller-metrics-lx5gg" Oct 14 13:31:17 crc kubenswrapper[4725]: I1014 13:31:17.905624 4725 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d31243a3-876e-4912-a468-195483df0425-ovn-rundir\") pod \"ovn-controller-metrics-lx5gg\" (UID: \"d31243a3-876e-4912-a468-195483df0425\") " pod="openstack/ovn-controller-metrics-lx5gg" Oct 14 13:31:17 crc kubenswrapper[4725]: I1014 13:31:17.905655 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d31243a3-876e-4912-a468-195483df0425-combined-ca-bundle\") pod \"ovn-controller-metrics-lx5gg\" (UID: \"d31243a3-876e-4912-a468-195483df0425\") " pod="openstack/ovn-controller-metrics-lx5gg" Oct 14 13:31:17 crc kubenswrapper[4725]: I1014 13:31:17.905718 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d31243a3-876e-4912-a468-195483df0425-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-lx5gg\" (UID: \"d31243a3-876e-4912-a468-195483df0425\") " pod="openstack/ovn-controller-metrics-lx5gg" Oct 14 13:31:17 crc kubenswrapper[4725]: I1014 13:31:17.905755 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d31243a3-876e-4912-a468-195483df0425-config\") pod \"ovn-controller-metrics-lx5gg\" (UID: \"d31243a3-876e-4912-a468-195483df0425\") " pod="openstack/ovn-controller-metrics-lx5gg" Oct 14 13:31:17 crc kubenswrapper[4725]: I1014 13:31:17.905803 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d31243a3-876e-4912-a468-195483df0425-ovs-rundir\") pod \"ovn-controller-metrics-lx5gg\" (UID: \"d31243a3-876e-4912-a468-195483df0425\") " pod="openstack/ovn-controller-metrics-lx5gg" Oct 14 13:31:17 crc kubenswrapper[4725]: I1014 13:31:17.906132 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d31243a3-876e-4912-a468-195483df0425-ovs-rundir\") pod \"ovn-controller-metrics-lx5gg\" (UID: \"d31243a3-876e-4912-a468-195483df0425\") " pod="openstack/ovn-controller-metrics-lx5gg" Oct 14 13:31:17 crc kubenswrapper[4725]: I1014 13:31:17.906447 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d31243a3-876e-4912-a468-195483df0425-ovn-rundir\") pod \"ovn-controller-metrics-lx5gg\" (UID: \"d31243a3-876e-4912-a468-195483df0425\") " pod="openstack/ovn-controller-metrics-lx5gg" Oct 14 13:31:17 crc kubenswrapper[4725]: I1014 13:31:17.907266 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d31243a3-876e-4912-a468-195483df0425-config\") pod \"ovn-controller-metrics-lx5gg\" (UID: \"d31243a3-876e-4912-a468-195483df0425\") " pod="openstack/ovn-controller-metrics-lx5gg" Oct 14 13:31:17 crc kubenswrapper[4725]: I1014 13:31:17.912778 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d31243a3-876e-4912-a468-195483df0425-combined-ca-bundle\") pod \"ovn-controller-metrics-lx5gg\" (UID: \"d31243a3-876e-4912-a468-195483df0425\") " pod="openstack/ovn-controller-metrics-lx5gg" Oct 14 13:31:17 crc kubenswrapper[4725]: I1014 13:31:17.937082 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d31243a3-876e-4912-a468-195483df0425-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-lx5gg\" (UID: \"d31243a3-876e-4912-a468-195483df0425\") " pod="openstack/ovn-controller-metrics-lx5gg" Oct 14 13:31:17 crc kubenswrapper[4725]: I1014 13:31:17.955766 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgtzw\" (UniqueName: \"kubernetes.io/projected/d31243a3-876e-4912-a468-195483df0425-kube-api-access-zgtzw\") pod \"ovn-controller-metrics-lx5gg\" (UID: \"d31243a3-876e-4912-a468-195483df0425\") " pod="openstack/ovn-controller-metrics-lx5gg" Oct 14 13:31:17 crc kubenswrapper[4725]: I1014 13:31:17.976610 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-8pdcq"] Oct 14 13:31:17 crc kubenswrapper[4725]: I1014 13:31:17.977817 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-8pdcq"] Oct 14 13:31:17 crc kubenswrapper[4725]: I1014 13:31:17.977895 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-8pdcq" Oct 14 13:31:17 crc kubenswrapper[4725]: I1014 13:31:17.980883 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.006847 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-lx5gg" Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.072697 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2srqh"] Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.109337 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3e3f245-5552-4de6-9c38-2f387f6234d6-config\") pod \"dnsmasq-dns-7fd796d7df-8pdcq\" (UID: \"d3e3f245-5552-4de6-9c38-2f387f6234d6\") " pod="openstack/dnsmasq-dns-7fd796d7df-8pdcq" Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.109389 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3e3f245-5552-4de6-9c38-2f387f6234d6-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-8pdcq\" (UID: \"d3e3f245-5552-4de6-9c38-2f387f6234d6\") " pod="openstack/dnsmasq-dns-7fd796d7df-8pdcq" Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.109499 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c857\" (UniqueName: \"kubernetes.io/projected/d3e3f245-5552-4de6-9c38-2f387f6234d6-kube-api-access-4c857\") pod \"dnsmasq-dns-7fd796d7df-8pdcq\" (UID: \"d3e3f245-5552-4de6-9c38-2f387f6234d6\") " pod="openstack/dnsmasq-dns-7fd796d7df-8pdcq" Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.109538 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3e3f245-5552-4de6-9c38-2f387f6234d6-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-8pdcq\" (UID: \"d3e3f245-5552-4de6-9c38-2f387f6234d6\") " pod="openstack/dnsmasq-dns-7fd796d7df-8pdcq" Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.113500 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-dx7jg"] Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.114722 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-dx7jg" Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.130630 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.132731 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-dx7jg"] Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.131162 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.144641 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b72e8db7-f91f-41b1-95bd-366cd156f5ed","Type":"ContainerStarted","Data":"c868cd490fbb08acc0e5e23424120447ef0f7d19d9446ddc06eef686bddc7799"} Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.163347 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"697b603d-cd65-466b-930a-86c43c8483ba","Type":"ContainerStarted","Data":"bd9e21eded3232c5b4e10e345f757d9cf55a381bfbdb63d184f5d58134416805"} Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.174958 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=26.305361964 podStartE2EDuration="33.174939936s" podCreationTimestamp="2025-10-14 13:30:45 +0000 UTC" firstStartedPulling="2025-10-14 13:31:06.167859413 +0000 UTC m=+983.016294232" lastFinishedPulling="2025-10-14 13:31:13.037437395 +0000 UTC m=+989.885872204" observedRunningTime="2025-10-14 13:31:18.170148386 +0000 UTC m=+995.018583195" watchObservedRunningTime="2025-10-14 13:31:18.174939936 +0000 UTC m=+995.023374745" Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.211328 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e301bc4-9f27-4a05-84ed-5c194e4244d9-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-dx7jg\" (UID: \"5e301bc4-9f27-4a05-84ed-5c194e4244d9\") " pod="openstack/dnsmasq-dns-86db49b7ff-dx7jg" Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.211401 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3e3f245-5552-4de6-9c38-2f387f6234d6-config\") pod \"dnsmasq-dns-7fd796d7df-8pdcq\" (UID: \"d3e3f245-5552-4de6-9c38-2f387f6234d6\") " pod="openstack/dnsmasq-dns-7fd796d7df-8pdcq" Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.211440 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3e3f245-5552-4de6-9c38-2f387f6234d6-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-8pdcq\" (UID: \"d3e3f245-5552-4de6-9c38-2f387f6234d6\") " pod="openstack/dnsmasq-dns-7fd796d7df-8pdcq" Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.211517 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jzbn\" (UniqueName: \"kubernetes.io/projected/5e301bc4-9f27-4a05-84ed-5c194e4244d9-kube-api-access-6jzbn\") pod \"dnsmasq-dns-86db49b7ff-dx7jg\" (UID: \"5e301bc4-9f27-4a05-84ed-5c194e4244d9\") " pod="openstack/dnsmasq-dns-86db49b7ff-dx7jg" Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.211548 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/5e301bc4-9f27-4a05-84ed-5c194e4244d9-config\") pod \"dnsmasq-dns-86db49b7ff-dx7jg\" (UID: \"5e301bc4-9f27-4a05-84ed-5c194e4244d9\") " pod="openstack/dnsmasq-dns-86db49b7ff-dx7jg" Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.211619 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4c857\" (UniqueName: \"kubernetes.io/projected/d3e3f245-5552-4de6-9c38-2f387f6234d6-kube-api-access-4c857\") pod \"dnsmasq-dns-7fd796d7df-8pdcq\" (UID: \"d3e3f245-5552-4de6-9c38-2f387f6234d6\") " pod="openstack/dnsmasq-dns-7fd796d7df-8pdcq" Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.211643 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e301bc4-9f27-4a05-84ed-5c194e4244d9-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-dx7jg\" (UID: \"5e301bc4-9f27-4a05-84ed-5c194e4244d9\") " pod="openstack/dnsmasq-dns-86db49b7ff-dx7jg" Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.211669 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e301bc4-9f27-4a05-84ed-5c194e4244d9-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-dx7jg\" (UID: \"5e301bc4-9f27-4a05-84ed-5c194e4244d9\") " pod="openstack/dnsmasq-dns-86db49b7ff-dx7jg" Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.211707 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3e3f245-5552-4de6-9c38-2f387f6234d6-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-8pdcq\" (UID: \"d3e3f245-5552-4de6-9c38-2f387f6234d6\") " pod="openstack/dnsmasq-dns-7fd796d7df-8pdcq" Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.212792 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3e3f245-5552-4de6-9c38-2f387f6234d6-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-8pdcq\" (UID: \"d3e3f245-5552-4de6-9c38-2f387f6234d6\") " pod="openstack/dnsmasq-dns-7fd796d7df-8pdcq" Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.213488 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3e3f245-5552-4de6-9c38-2f387f6234d6-config\") pod \"dnsmasq-dns-7fd796d7df-8pdcq\" (UID: \"d3e3f245-5552-4de6-9c38-2f387f6234d6\") " pod="openstack/dnsmasq-dns-7fd796d7df-8pdcq" Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.214172 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3e3f245-5552-4de6-9c38-2f387f6234d6-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-8pdcq\" (UID: \"d3e3f245-5552-4de6-9c38-2f387f6234d6\") " pod="openstack/dnsmasq-dns-7fd796d7df-8pdcq" Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.227667 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=24.644993944 podStartE2EDuration="32.227650449s" podCreationTimestamp="2025-10-14 13:30:46 +0000 UTC" firstStartedPulling="2025-10-14 13:31:05.45666204 +0000 UTC m=+982.305096849" lastFinishedPulling="2025-10-14 13:31:13.039318545 +0000 UTC m=+989.887753354" observedRunningTime="2025-10-14 13:31:18.226615032 +0000 UTC m=+995.075049861" watchObservedRunningTime="2025-10-14 13:31:18.227650449 +0000 UTC m=+995.076085278" Oct 14 
13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.231467 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.238328 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4c857\" (UniqueName: \"kubernetes.io/projected/d3e3f245-5552-4de6-9c38-2f387f6234d6-kube-api-access-4c857\") pod \"dnsmasq-dns-7fd796d7df-8pdcq\" (UID: \"d3e3f245-5552-4de6-9c38-2f387f6234d6\") " pod="openstack/dnsmasq-dns-7fd796d7df-8pdcq" Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.264466 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.265658 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.313558 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jzbn\" (UniqueName: \"kubernetes.io/projected/5e301bc4-9f27-4a05-84ed-5c194e4244d9-kube-api-access-6jzbn\") pod \"dnsmasq-dns-86db49b7ff-dx7jg\" (UID: \"5e301bc4-9f27-4a05-84ed-5c194e4244d9\") " pod="openstack/dnsmasq-dns-86db49b7ff-dx7jg" Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.313601 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e301bc4-9f27-4a05-84ed-5c194e4244d9-config\") pod \"dnsmasq-dns-86db49b7ff-dx7jg\" (UID: \"5e301bc4-9f27-4a05-84ed-5c194e4244d9\") " pod="openstack/dnsmasq-dns-86db49b7ff-dx7jg" Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.313793 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e301bc4-9f27-4a05-84ed-5c194e4244d9-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-dx7jg\" (UID: \"5e301bc4-9f27-4a05-84ed-5c194e4244d9\") " pod="openstack/dnsmasq-dns-86db49b7ff-dx7jg" Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.313818 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e301bc4-9f27-4a05-84ed-5c194e4244d9-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-dx7jg\" (UID: \"5e301bc4-9f27-4a05-84ed-5c194e4244d9\") " pod="openstack/dnsmasq-dns-86db49b7ff-dx7jg" Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.313908 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e301bc4-9f27-4a05-84ed-5c194e4244d9-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-dx7jg\" (UID: \"5e301bc4-9f27-4a05-84ed-5c194e4244d9\") " pod="openstack/dnsmasq-dns-86db49b7ff-dx7jg" Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.317669 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e301bc4-9f27-4a05-84ed-5c194e4244d9-config\") pod \"dnsmasq-dns-86db49b7ff-dx7jg\" (UID: \"5e301bc4-9f27-4a05-84ed-5c194e4244d9\") " pod="openstack/dnsmasq-dns-86db49b7ff-dx7jg" Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.321038 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e301bc4-9f27-4a05-84ed-5c194e4244d9-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-dx7jg\" (UID: \"5e301bc4-9f27-4a05-84ed-5c194e4244d9\") " 
pod="openstack/dnsmasq-dns-86db49b7ff-dx7jg" Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.321553 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e301bc4-9f27-4a05-84ed-5c194e4244d9-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-dx7jg\" (UID: \"5e301bc4-9f27-4a05-84ed-5c194e4244d9\") " pod="openstack/dnsmasq-dns-86db49b7ff-dx7jg" Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.322542 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e301bc4-9f27-4a05-84ed-5c194e4244d9-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-dx7jg\" (UID: \"5e301bc4-9f27-4a05-84ed-5c194e4244d9\") " pod="openstack/dnsmasq-dns-86db49b7ff-dx7jg" Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.328103 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-8pdcq" Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.328864 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.332484 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-6tlzb" Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.348349 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jzbn\" (UniqueName: \"kubernetes.io/projected/5e301bc4-9f27-4a05-84ed-5c194e4244d9-kube-api-access-6jzbn\") pod \"dnsmasq-dns-86db49b7ff-dx7jg\" (UID: \"5e301bc4-9f27-4a05-84ed-5c194e4244d9\") " pod="openstack/dnsmasq-dns-86db49b7ff-dx7jg" Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.415574 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85c2073b-91ed-4a5a-a28c-f5c073f5e70e-config\") pod \"85c2073b-91ed-4a5a-a28c-f5c073f5e70e\" (UID: \"85c2073b-91ed-4a5a-a28c-f5c073f5e70e\") " Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.418548 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85c2073b-91ed-4a5a-a28c-f5c073f5e70e-config" (OuterVolumeSpecName: "config") pod "85c2073b-91ed-4a5a-a28c-f5c073f5e70e" (UID: "85c2073b-91ed-4a5a-a28c-f5c073f5e70e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.418783 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnrrm\" (UniqueName: \"kubernetes.io/projected/85c2073b-91ed-4a5a-a28c-f5c073f5e70e-kube-api-access-jnrrm\") pod \"85c2073b-91ed-4a5a-a28c-f5c073f5e70e\" (UID: \"85c2073b-91ed-4a5a-a28c-f5c073f5e70e\") " Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.418826 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85c2073b-91ed-4a5a-a28c-f5c073f5e70e-dns-svc\") pod \"85c2073b-91ed-4a5a-a28c-f5c073f5e70e\" (UID: \"85c2073b-91ed-4a5a-a28c-f5c073f5e70e\") " Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.420896 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85c2073b-91ed-4a5a-a28c-f5c073f5e70e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "85c2073b-91ed-4a5a-a28c-f5c073f5e70e" (UID: "85c2073b-91ed-4a5a-a28c-f5c073f5e70e"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.421354 4725 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85c2073b-91ed-4a5a-a28c-f5c073f5e70e-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.421371 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85c2073b-91ed-4a5a-a28c-f5c073f5e70e-config\") on node \"crc\" DevicePath \"\""
Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.423129 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85c2073b-91ed-4a5a-a28c-f5c073f5e70e-kube-api-access-jnrrm" (OuterVolumeSpecName: "kube-api-access-jnrrm") pod "85c2073b-91ed-4a5a-a28c-f5c073f5e70e" (UID: "85c2073b-91ed-4a5a-a28c-f5c073f5e70e"). InnerVolumeSpecName "kube-api-access-jnrrm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.455597 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-dx7jg"
Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.530423 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnrrm\" (UniqueName: \"kubernetes.io/projected/85c2073b-91ed-4a5a-a28c-f5c073f5e70e-kube-api-access-jnrrm\") on node \"crc\" DevicePath \"\""
Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.581328 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-2srqh"
Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.635657 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-8pdcq"]
Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.658496 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-lx5gg"]
Oct 14 13:31:18 crc kubenswrapper[4725]: W1014 13:31:18.660495 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd31243a3_876e_4912_a468_195483df0425.slice/crio-4c24d562b359538e4f1614a824858f37952a49c7e6be1623a3be190f20fb0ba5 WatchSource:0}: Error finding container 4c24d562b359538e4f1614a824858f37952a49c7e6be1623a3be190f20fb0ba5: Status 404 returned error can't find the container with id 4c24d562b359538e4f1614a824858f37952a49c7e6be1623a3be190f20fb0ba5
Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.734910 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd9edf73-e00e-4b3d-84ea-93201123d400-config\") pod \"dd9edf73-e00e-4b3d-84ea-93201123d400\" (UID: \"dd9edf73-e00e-4b3d-84ea-93201123d400\") "
Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.734995 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-948p8\" (UniqueName: \"kubernetes.io/projected/dd9edf73-e00e-4b3d-84ea-93201123d400-kube-api-access-948p8\") pod \"dd9edf73-e00e-4b3d-84ea-93201123d400\" (UID: \"dd9edf73-e00e-4b3d-84ea-93201123d400\") "
Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.735093 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd9edf73-e00e-4b3d-84ea-93201123d400-dns-svc\") pod \"dd9edf73-e00e-4b3d-84ea-93201123d400\" (UID: \"dd9edf73-e00e-4b3d-84ea-93201123d400\") "
Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.735408 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd9edf73-e00e-4b3d-84ea-93201123d400-config" (OuterVolumeSpecName: "config") pod "dd9edf73-e00e-4b3d-84ea-93201123d400" (UID: "dd9edf73-e00e-4b3d-84ea-93201123d400"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.735896 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd9edf73-e00e-4b3d-84ea-93201123d400-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dd9edf73-e00e-4b3d-84ea-93201123d400" (UID: "dd9edf73-e00e-4b3d-84ea-93201123d400"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.738394 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd9edf73-e00e-4b3d-84ea-93201123d400-kube-api-access-948p8" (OuterVolumeSpecName: "kube-api-access-948p8") pod "dd9edf73-e00e-4b3d-84ea-93201123d400" (UID: "dd9edf73-e00e-4b3d-84ea-93201123d400"). InnerVolumeSpecName "kube-api-access-948p8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.837709 4725 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd9edf73-e00e-4b3d-84ea-93201123d400-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.837768 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd9edf73-e00e-4b3d-84ea-93201123d400-config\") on node \"crc\" DevicePath \"\""
Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.837817 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-948p8\" (UniqueName: \"kubernetes.io/projected/dd9edf73-e00e-4b3d-84ea-93201123d400-kube-api-access-948p8\") on node \"crc\" DevicePath \"\""
Oct 14 13:31:18 crc kubenswrapper[4725]: I1014 13:31:18.962460 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-dx7jg"]
Oct 14 13:31:18 crc kubenswrapper[4725]: W1014 13:31:18.968935 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e301bc4_9f27_4a05_84ed_5c194e4244d9.slice/crio-048ae68be5ceee89983f408d812930dc04a297831b9a26e64c596423e8bbda63 WatchSource:0}: Error finding container 048ae68be5ceee89983f408d812930dc04a297831b9a26e64c596423e8bbda63: Status 404 returned error can't find the container with id 048ae68be5ceee89983f408d812930dc04a297831b9a26e64c596423e8bbda63
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.118530 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.165952 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.174696 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2srqh" event={"ID":"dd9edf73-e00e-4b3d-84ea-93201123d400","Type":"ContainerDied","Data":"33a7a5d468119e1a3e6ca620fd8132ee08eedf95a7fc4c3ae35f6641b37ced8b"}
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.174714 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-2srqh"
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.175818 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-dx7jg" event={"ID":"5e301bc4-9f27-4a05-84ed-5c194e4244d9","Type":"ContainerStarted","Data":"048ae68be5ceee89983f408d812930dc04a297831b9a26e64c596423e8bbda63"}
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.177645 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-8pdcq" event={"ID":"d3e3f245-5552-4de6-9c38-2f387f6234d6","Type":"ContainerStarted","Data":"9c7122cf0bd9bbcdd8f26856b7758ba79e950e6b9d5007497b9f3cd97e7fc195"}
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.179840 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-6tlzb" event={"ID":"85c2073b-91ed-4a5a-a28c-f5c073f5e70e","Type":"ContainerDied","Data":"03887b87a5232fa61468ace1642dd4667c071858a0b97bacd4535176e68ec37a"}
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.179950 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-6tlzb"
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.197859 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-lx5gg" event={"ID":"d31243a3-876e-4912-a468-195483df0425","Type":"ContainerStarted","Data":"1ad01f9c0299e9398e4683b1a7ff2d7175a0f8827a708591b61371b357fe5403"}
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.197923 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-lx5gg" event={"ID":"d31243a3-876e-4912-a468-195483df0425","Type":"ContainerStarted","Data":"4c24d562b359538e4f1614a824858f37952a49c7e6be1623a3be190f20fb0ba5"}
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.239908 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.257165 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2srqh"]
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.266529 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2srqh"]
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.266549 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-lx5gg" podStartSLOduration=2.266522035 podStartE2EDuration="2.266522035s" podCreationTimestamp="2025-10-14 13:31:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:31:19.252191615 +0000 UTC m=+996.100626464" watchObservedRunningTime="2025-10-14 13:31:19.266522035 +0000 UTC m=+996.114956844"
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.304592 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-6tlzb"]
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.316692 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-6tlzb"]
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.580019 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.581409 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.583395 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.584302 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.584586 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-mrwgt"
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.584595 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.593847 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.651528 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0431437b-b27f-4e47-8a60-6aecd1148270-scripts\") pod \"ovn-northd-0\" (UID: \"0431437b-b27f-4e47-8a60-6aecd1148270\") " pod="openstack/ovn-northd-0"
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.651955 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0431437b-b27f-4e47-8a60-6aecd1148270-config\") pod \"ovn-northd-0\" (UID: \"0431437b-b27f-4e47-8a60-6aecd1148270\") " pod="openstack/ovn-northd-0"
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.651976 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0431437b-b27f-4e47-8a60-6aecd1148270-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0431437b-b27f-4e47-8a60-6aecd1148270\") " pod="openstack/ovn-northd-0"
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.652006 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0431437b-b27f-4e47-8a60-6aecd1148270-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"0431437b-b27f-4e47-8a60-6aecd1148270\") " pod="openstack/ovn-northd-0"
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.652082 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0431437b-b27f-4e47-8a60-6aecd1148270-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"0431437b-b27f-4e47-8a60-6aecd1148270\") " pod="openstack/ovn-northd-0"
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.652138 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk6h8\" (UniqueName: \"kubernetes.io/projected/0431437b-b27f-4e47-8a60-6aecd1148270-kube-api-access-nk6h8\") pod \"ovn-northd-0\" (UID: \"0431437b-b27f-4e47-8a60-6aecd1148270\") " pod="openstack/ovn-northd-0"
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.652511 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0431437b-b27f-4e47-8a60-6aecd1148270-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"0431437b-b27f-4e47-8a60-6aecd1148270\") " pod="openstack/ovn-northd-0"
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.754060 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0431437b-b27f-4e47-8a60-6aecd1148270-config\") pod \"ovn-northd-0\" (UID: \"0431437b-b27f-4e47-8a60-6aecd1148270\") " pod="openstack/ovn-northd-0"
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.754180 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0431437b-b27f-4e47-8a60-6aecd1148270-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0431437b-b27f-4e47-8a60-6aecd1148270\") " pod="openstack/ovn-northd-0"
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.754273 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0431437b-b27f-4e47-8a60-6aecd1148270-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"0431437b-b27f-4e47-8a60-6aecd1148270\") " pod="openstack/ovn-northd-0"
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.754344 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0431437b-b27f-4e47-8a60-6aecd1148270-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"0431437b-b27f-4e47-8a60-6aecd1148270\") " pod="openstack/ovn-northd-0"
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.754428 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk6h8\" (UniqueName: \"kubernetes.io/projected/0431437b-b27f-4e47-8a60-6aecd1148270-kube-api-access-nk6h8\") pod \"ovn-northd-0\" (UID: \"0431437b-b27f-4e47-8a60-6aecd1148270\") " pod="openstack/ovn-northd-0"
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.754572 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0431437b-b27f-4e47-8a60-6aecd1148270-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"0431437b-b27f-4e47-8a60-6aecd1148270\") " pod="openstack/ovn-northd-0"
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.754645 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0431437b-b27f-4e47-8a60-6aecd1148270-scripts\") pod \"ovn-northd-0\" (UID: \"0431437b-b27f-4e47-8a60-6aecd1148270\") " pod="openstack/ovn-northd-0"
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.755207 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0431437b-b27f-4e47-8a60-6aecd1148270-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"0431437b-b27f-4e47-8a60-6aecd1148270\") " pod="openstack/ovn-northd-0"
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.755606 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0431437b-b27f-4e47-8a60-6aecd1148270-scripts\") pod \"ovn-northd-0\" (UID: \"0431437b-b27f-4e47-8a60-6aecd1148270\") " pod="openstack/ovn-northd-0"
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.756097 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0431437b-b27f-4e47-8a60-6aecd1148270-config\") pod \"ovn-northd-0\" (UID: \"0431437b-b27f-4e47-8a60-6aecd1148270\") " pod="openstack/ovn-northd-0"
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.759712 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0431437b-b27f-4e47-8a60-6aecd1148270-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0431437b-b27f-4e47-8a60-6aecd1148270\") " pod="openstack/ovn-northd-0"
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.761544 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0431437b-b27f-4e47-8a60-6aecd1148270-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"0431437b-b27f-4e47-8a60-6aecd1148270\") " pod="openstack/ovn-northd-0"
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.763059 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0431437b-b27f-4e47-8a60-6aecd1148270-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"0431437b-b27f-4e47-8a60-6aecd1148270\") " pod="openstack/ovn-northd-0"
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.776816 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk6h8\" (UniqueName: \"kubernetes.io/projected/0431437b-b27f-4e47-8a60-6aecd1148270-kube-api-access-nk6h8\") pod \"ovn-northd-0\" (UID: \"0431437b-b27f-4e47-8a60-6aecd1148270\") " pod="openstack/ovn-northd-0"
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.932109 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85c2073b-91ed-4a5a-a28c-f5c073f5e70e" path="/var/lib/kubelet/pods/85c2073b-91ed-4a5a-a28c-f5c073f5e70e/volumes"
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.932503 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd9edf73-e00e-4b3d-84ea-93201123d400" path="/var/lib/kubelet/pods/dd9edf73-e00e-4b3d-84ea-93201123d400/volumes"
Oct 14 13:31:19 crc kubenswrapper[4725]: I1014 13:31:19.978675 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Oct 14 13:31:20 crc kubenswrapper[4725]: I1014 13:31:20.055201 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Oct 14 13:31:20 crc kubenswrapper[4725]: I1014 13:31:20.111059 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-8pdcq"]
Oct 14 13:31:20 crc kubenswrapper[4725]: I1014 13:31:20.148088 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-kqvmq"]
Oct 14 13:31:20 crc kubenswrapper[4725]: I1014 13:31:20.149858 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-kqvmq"
Oct 14 13:31:20 crc kubenswrapper[4725]: I1014 13:31:20.203251 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-kqvmq"]
Oct 14 13:31:20 crc kubenswrapper[4725]: I1014 13:31:20.248303 4725 generic.go:334] "Generic (PLEG): container finished" podID="5e301bc4-9f27-4a05-84ed-5c194e4244d9" containerID="de8aebdbdcb14d602266c7e3f4aada7ce3f6ed4ccbb5304610a7f9b3cf3f75ba" exitCode=0
Oct 14 13:31:20 crc kubenswrapper[4725]: I1014 13:31:20.248410 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-dx7jg" event={"ID":"5e301bc4-9f27-4a05-84ed-5c194e4244d9","Type":"ContainerDied","Data":"de8aebdbdcb14d602266c7e3f4aada7ce3f6ed4ccbb5304610a7f9b3cf3f75ba"}
Oct 14 13:31:20 crc kubenswrapper[4725]: I1014 13:31:20.263397 4725 generic.go:334] "Generic (PLEG): container finished" podID="d3e3f245-5552-4de6-9c38-2f387f6234d6" containerID="cc8aaf36d87737dcb9c65fb22dcb90404b7863ec644aeb919b45392a9ce1ea40" exitCode=0
Oct 14 13:31:20 crc kubenswrapper[4725]: I1014 13:31:20.263724 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-8pdcq" event={"ID":"d3e3f245-5552-4de6-9c38-2f387f6234d6","Type":"ContainerDied","Data":"cc8aaf36d87737dcb9c65fb22dcb90404b7863ec644aeb919b45392a9ce1ea40"}
Oct 14 13:31:20 crc kubenswrapper[4725]: I1014 13:31:20.271055 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c88n6\" (UniqueName: \"kubernetes.io/projected/4dfca925-34cf-487b-a4ad-5a16a8f24b65-kube-api-access-c88n6\") pod \"dnsmasq-dns-698758b865-kqvmq\" (UID: \"4dfca925-34cf-487b-a4ad-5a16a8f24b65\") " pod="openstack/dnsmasq-dns-698758b865-kqvmq"
Oct 14 13:31:20 crc kubenswrapper[4725]: I1014 13:31:20.271112 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dfca925-34cf-487b-a4ad-5a16a8f24b65-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-kqvmq\" (UID: \"4dfca925-34cf-487b-a4ad-5a16a8f24b65\") " pod="openstack/dnsmasq-dns-698758b865-kqvmq"
Oct 14 13:31:20 crc kubenswrapper[4725]: I1014 13:31:20.271169 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dfca925-34cf-487b-a4ad-5a16a8f24b65-dns-svc\") pod \"dnsmasq-dns-698758b865-kqvmq\" (UID: \"4dfca925-34cf-487b-a4ad-5a16a8f24b65\") " pod="openstack/dnsmasq-dns-698758b865-kqvmq"
Oct 14 13:31:20 crc kubenswrapper[4725]: I1014 13:31:20.271229 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dfca925-34cf-487b-a4ad-5a16a8f24b65-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-kqvmq\" (UID: \"4dfca925-34cf-487b-a4ad-5a16a8f24b65\") " pod="openstack/dnsmasq-dns-698758b865-kqvmq"
Oct 14 13:31:20 crc kubenswrapper[4725]: I1014 13:31:20.271256 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dfca925-34cf-487b-a4ad-5a16a8f24b65-config\") pod \"dnsmasq-dns-698758b865-kqvmq\" (UID: \"4dfca925-34cf-487b-a4ad-5a16a8f24b65\") " pod="openstack/dnsmasq-dns-698758b865-kqvmq"
Oct 14 13:31:20 crc kubenswrapper[4725]: I1014 13:31:20.372647 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dfca925-34cf-487b-a4ad-5a16a8f24b65-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-kqvmq\" (UID: \"4dfca925-34cf-487b-a4ad-5a16a8f24b65\") " pod="openstack/dnsmasq-dns-698758b865-kqvmq"
Oct 14 13:31:20 crc kubenswrapper[4725]: I1014 13:31:20.372875 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dfca925-34cf-487b-a4ad-5a16a8f24b65-config\") pod \"dnsmasq-dns-698758b865-kqvmq\" (UID: \"4dfca925-34cf-487b-a4ad-5a16a8f24b65\") " pod="openstack/dnsmasq-dns-698758b865-kqvmq"
Oct 14 13:31:20 crc kubenswrapper[4725]: I1014 13:31:20.373064 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c88n6\" (UniqueName: \"kubernetes.io/projected/4dfca925-34cf-487b-a4ad-5a16a8f24b65-kube-api-access-c88n6\") pod \"dnsmasq-dns-698758b865-kqvmq\" (UID: \"4dfca925-34cf-487b-a4ad-5a16a8f24b65\") " pod="openstack/dnsmasq-dns-698758b865-kqvmq"
Oct 14 13:31:20 crc kubenswrapper[4725]: I1014 13:31:20.373095 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dfca925-34cf-487b-a4ad-5a16a8f24b65-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-kqvmq\" (UID: \"4dfca925-34cf-487b-a4ad-5a16a8f24b65\") " pod="openstack/dnsmasq-dns-698758b865-kqvmq"
Oct 14 13:31:20 crc kubenswrapper[4725]: I1014 13:31:20.373373 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dfca925-34cf-487b-a4ad-5a16a8f24b65-dns-svc\") pod \"dnsmasq-dns-698758b865-kqvmq\" (UID: \"4dfca925-34cf-487b-a4ad-5a16a8f24b65\") " pod="openstack/dnsmasq-dns-698758b865-kqvmq"
Oct 14 13:31:20 crc kubenswrapper[4725]: I1014 13:31:20.375143 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dfca925-34cf-487b-a4ad-5a16a8f24b65-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-kqvmq\" (UID: \"4dfca925-34cf-487b-a4ad-5a16a8f24b65\") " pod="openstack/dnsmasq-dns-698758b865-kqvmq"
Oct 14 13:31:20 crc kubenswrapper[4725]: I1014 13:31:20.375805 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dfca925-34cf-487b-a4ad-5a16a8f24b65-config\") pod \"dnsmasq-dns-698758b865-kqvmq\" (UID: \"4dfca925-34cf-487b-a4ad-5a16a8f24b65\") " pod="openstack/dnsmasq-dns-698758b865-kqvmq"
Oct 14 13:31:20 crc kubenswrapper[4725]: I1014 13:31:20.375805 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dfca925-34cf-487b-a4ad-5a16a8f24b65-dns-svc\") pod \"dnsmasq-dns-698758b865-kqvmq\" (UID: \"4dfca925-34cf-487b-a4ad-5a16a8f24b65\") " pod="openstack/dnsmasq-dns-698758b865-kqvmq"
Oct 14 13:31:20 crc kubenswrapper[4725]: I1014 13:31:20.375970 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dfca925-34cf-487b-a4ad-5a16a8f24b65-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-kqvmq\" (UID: \"4dfca925-34cf-487b-a4ad-5a16a8f24b65\") " pod="openstack/dnsmasq-dns-698758b865-kqvmq"
Oct 14 13:31:20 crc kubenswrapper[4725]: I1014 13:31:20.393286 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c88n6\" (UniqueName: \"kubernetes.io/projected/4dfca925-34cf-487b-a4ad-5a16a8f24b65-kube-api-access-c88n6\") pod \"dnsmasq-dns-698758b865-kqvmq\" (UID: \"4dfca925-34cf-487b-a4ad-5a16a8f24b65\") " pod="openstack/dnsmasq-dns-698758b865-kqvmq"
Oct 14 13:31:20 crc kubenswrapper[4725]: E1014 13:31:20.513232 4725 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
Oct 14 13:31:20 crc kubenswrapper[4725]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/d3e3f245-5552-4de6-9c38-2f387f6234d6/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory
Oct 14 13:31:20 crc kubenswrapper[4725]: > podSandboxID="9c7122cf0bd9bbcdd8f26856b7758ba79e950e6b9d5007497b9f3cd97e7fc195"
Oct 14 13:31:20 crc kubenswrapper[4725]: E1014 13:31:20.513406 4725 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Oct 14 13:31:20 crc kubenswrapper[4725]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5bfh5d7h8hd8h664h564hfbh5d4h5f5h55h5fch66h675hb8h65bh64dhbh5dchc9h66fh5dbhf4h658h64ch55bhbh65h55dh597h68dh579hbdq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4c857,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7fd796d7df-8pdcq_openstack(d3e3f245-5552-4de6-9c38-2f387f6234d6): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/d3e3f245-5552-4de6-9c38-2f387f6234d6/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory
Oct 14 13:31:20 crc kubenswrapper[4725]: > logger="UnhandledError"
Oct 14 13:31:20 crc kubenswrapper[4725]: E1014 13:31:20.514758 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/d3e3f245-5552-4de6-9c38-2f387f6234d6/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-7fd796d7df-8pdcq" podUID="d3e3f245-5552-4de6-9c38-2f387f6234d6"
Oct 14 13:31:20 crc kubenswrapper[4725]: I1014 13:31:20.530501 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-kqvmq"
Oct 14 13:31:20 crc kubenswrapper[4725]: I1014 13:31:20.606623 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Oct 14 13:31:20 crc kubenswrapper[4725]: W1014 13:31:20.625127 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0431437b_b27f_4e47_8a60_6aecd1148270.slice/crio-043419c933f78391341d1b2269215f71cae861109fb43408f6f6c8fee9249bb5 WatchSource:0}: Error finding container 043419c933f78391341d1b2269215f71cae861109fb43408f6f6c8fee9249bb5: Status 404 returned error can't find the container with id 043419c933f78391341d1b2269215f71cae861109fb43408f6f6c8fee9249bb5
Oct 14 13:31:20 crc kubenswrapper[4725]: E1014 13:31:20.718518 4725 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
Oct 14 13:31:20 crc kubenswrapper[4725]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/5e301bc4-9f27-4a05-84ed-5c194e4244d9/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory
Oct 14 13:31:20 crc kubenswrapper[4725]: > podSandboxID="048ae68be5ceee89983f408d812930dc04a297831b9a26e64c596423e8bbda63"
Oct 14 13:31:20 crc kubenswrapper[4725]: E1014 13:31:20.719001 4725 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Oct 14 13:31:20 crc kubenswrapper[4725]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n599h5cbh7ch5d4h66fh676hdbh546h95h88h5ffh55ch7fhch57ch687hddhc7h5fdh57dh674h56fh64ch98h9bh557h55dh646h54ch54fh5c4h597q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6jzbn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-86db49b7ff-dx7jg_openstack(5e301bc4-9f27-4a05-84ed-5c194e4244d9): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/5e301bc4-9f27-4a05-84ed-5c194e4244d9/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory
Oct 14 13:31:20 crc kubenswrapper[4725]: > logger="UnhandledError"
Oct 14 13:31:20 crc kubenswrapper[4725]: E1014 13:31:20.720620 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/5e301bc4-9f27-4a05-84ed-5c194e4244d9/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-86db49b7ff-dx7jg" podUID="5e301bc4-9f27-4a05-84ed-5c194e4244d9"
Oct 14 13:31:20 crc kubenswrapper[4725]: I1014 13:31:20.956339 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-kqvmq"]
Oct 14 13:31:21 crc kubenswrapper[4725]: I1014 13:31:21.245315 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Oct 14 13:31:21 crc kubenswrapper[4725]: I1014 13:31:21.250410 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Oct 14 13:31:21 crc kubenswrapper[4725]: I1014 13:31:21.253378 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Oct 14 13:31:21 crc kubenswrapper[4725]: I1014 13:31:21.253575 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Oct 14 13:31:21 crc kubenswrapper[4725]: I1014 13:31:21.253882 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Oct 14 13:31:21 crc kubenswrapper[4725]: I1014 13:31:21.254629 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-p48nl"
Oct 14 13:31:21 crc kubenswrapper[4725]: I1014 13:31:21.273000 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Oct 14 13:31:21 crc kubenswrapper[4725]: I1014 13:31:21.287395 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8b115803-e57f-4651-8a38-9b1aece05cdf-cache\") pod \"swift-storage-0\" (UID: \"8b115803-e57f-4651-8a38-9b1aece05cdf\") " pod="openstack/swift-storage-0"
Oct 14 13:31:21 crc kubenswrapper[4725]: I1014 13:31:21.287507 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"8b115803-e57f-4651-8a38-9b1aece05cdf\") " pod="openstack/swift-storage-0"
Oct 14 13:31:21 crc kubenswrapper[4725]: I1014 13:31:21.287571 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b115803-e57f-4651-8a38-9b1aece05cdf-etc-swift\") pod \"swift-storage-0\" (UID: \"8b115803-e57f-4651-8a38-9b1aece05cdf\") " pod="openstack/swift-storage-0"
Oct 14 13:31:21 crc kubenswrapper[4725]: I1014 13:31:21.287595 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8b115803-e57f-4651-8a38-9b1aece05cdf-lock\") pod \"swift-storage-0\" (UID: \"8b115803-e57f-4651-8a38-9b1aece05cdf\") " pod="openstack/swift-storage-0"
Oct 14 13:31:21 crc kubenswrapper[4725]: I1014 13:31:21.287616 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-994s9\" (UniqueName: \"kubernetes.io/projected/8b115803-e57f-4651-8a38-9b1aece05cdf-kube-api-access-994s9\") pod \"swift-storage-0\" (UID: \"8b115803-e57f-4651-8a38-9b1aece05cdf\") " pod="openstack/swift-storage-0"
Oct 14 13:31:21 crc kubenswrapper[4725]: I1014 13:31:21.293746 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0431437b-b27f-4e47-8a60-6aecd1148270","Type":"ContainerStarted","Data":"043419c933f78391341d1b2269215f71cae861109fb43408f6f6c8fee9249bb5"}
Oct 14 13:31:21 crc kubenswrapper[4725]: I1014 13:31:21.295785 4725 generic.go:334] "Generic (PLEG): container finished" podID="4dfca925-34cf-487b-a4ad-5a16a8f24b65" containerID="14c109369e3be6f3172cf0a59123f0ddb647b1f34dec478eddbc0a0b1a8fb9ac" exitCode=0
Oct 14 13:31:21 crc kubenswrapper[4725]: I1014 13:31:21.295855 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-kqvmq" event={"ID":"4dfca925-34cf-487b-a4ad-5a16a8f24b65","Type":"ContainerDied","Data":"14c109369e3be6f3172cf0a59123f0ddb647b1f34dec478eddbc0a0b1a8fb9ac"}
Oct 14 13:31:21 crc kubenswrapper[4725]: I1014 13:31:21.295916 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-kqvmq" event={"ID":"4dfca925-34cf-487b-a4ad-5a16a8f24b65","Type":"ContainerStarted","Data":"11f8c3b74deb0633cf1e93d0921358edf764e3694a51cebe2de9b36bbad48362"}
Oct 14 13:31:21 crc kubenswrapper[4725]: I1014 13:31:21.389314 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"8b115803-e57f-4651-8a38-9b1aece05cdf\") " pod="openstack/swift-storage-0"
Oct 14 13:31:21 crc kubenswrapper[4725]: I1014 13:31:21.389761 4725 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"8b115803-e57f-4651-8a38-9b1aece05cdf\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/swift-storage-0"
Oct 14 13:31:21 crc kubenswrapper[4725]: I1014 13:31:21.391519 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b115803-e57f-4651-8a38-9b1aece05cdf-etc-swift\") pod \"swift-storage-0\" (UID: \"8b115803-e57f-4651-8a38-9b1aece05cdf\") " pod="openstack/swift-storage-0"
Oct 14 13:31:21 crc kubenswrapper[4725]: I1014 13:31:21.391600 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8b115803-e57f-4651-8a38-9b1aece05cdf-lock\") pod \"swift-storage-0\" (UID: \"8b115803-e57f-4651-8a38-9b1aece05cdf\") " pod="openstack/swift-storage-0"
Oct 14 13:31:21 crc kubenswrapper[4725]: I1014 13:31:21.391623 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-994s9\" (UniqueName: \"kubernetes.io/projected/8b115803-e57f-4651-8a38-9b1aece05cdf-kube-api-access-994s9\") pod \"swift-storage-0\" (UID: \"8b115803-e57f-4651-8a38-9b1aece05cdf\") " pod="openstack/swift-storage-0"
Oct 14 13:31:21 crc kubenswrapper[4725]: I1014 13:31:21.391943 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8b115803-e57f-4651-8a38-9b1aece05cdf-cache\") pod \"swift-storage-0\" (UID: \"8b115803-e57f-4651-8a38-9b1aece05cdf\") " pod="openstack/swift-storage-0"
Oct 14 13:31:21 crc kubenswrapper[4725]: E1014 13:31:21.392202 4725 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Oct 14 13:31:21 crc kubenswrapper[4725]: E1014 13:31:21.392218 4725 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Oct 14 13:31:21 crc kubenswrapper[4725]: E1014 13:31:21.392257 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8b115803-e57f-4651-8a38-9b1aece05cdf-etc-swift podName:8b115803-e57f-4651-8a38-9b1aece05cdf nodeName:}" failed. No retries permitted until 2025-10-14 13:31:21.892242151 +0000 UTC m=+998.740676960 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8b115803-e57f-4651-8a38-9b1aece05cdf-etc-swift") pod "swift-storage-0" (UID: "8b115803-e57f-4651-8a38-9b1aece05cdf") : configmap "swift-ring-files" not found
Oct 14 13:31:21 crc kubenswrapper[4725]: I1014 13:31:21.392897 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8b115803-e57f-4651-8a38-9b1aece05cdf-cache\") pod \"swift-storage-0\" (UID: \"8b115803-e57f-4651-8a38-9b1aece05cdf\") " pod="openstack/swift-storage-0"
Oct 14 13:31:21 crc kubenswrapper[4725]: I1014 13:31:21.393233 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8b115803-e57f-4651-8a38-9b1aece05cdf-lock\") pod \"swift-storage-0\" (UID: \"8b115803-e57f-4651-8a38-9b1aece05cdf\") " pod="openstack/swift-storage-0"
Oct 14 13:31:21 crc kubenswrapper[4725]: I1014 13:31:21.412147 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-994s9\" (UniqueName: \"kubernetes.io/projected/8b115803-e57f-4651-8a38-9b1aece05cdf-kube-api-access-994s9\") pod \"swift-storage-0\" (UID: \"8b115803-e57f-4651-8a38-9b1aece05cdf\") " pod="openstack/swift-storage-0"
Oct 14 13:31:21 crc kubenswrapper[4725]: I1014 13:31:21.432612 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"8b115803-e57f-4651-8a38-9b1aece05cdf\") " pod="openstack/swift-storage-0"
Oct 14 13:31:21 crc kubenswrapper[4725]: I1014 13:31:21.745720 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-2h97x"]
Oct 14 13:31:21 crc kubenswrapper[4725]: I1014 13:31:21.747183 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-2h97x"
Oct 14 13:31:21 crc kubenswrapper[4725]: I1014 13:31:21.749412 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Oct 14 13:31:21 crc kubenswrapper[4725]: I1014 13:31:21.749732 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Oct 14 13:31:21 crc kubenswrapper[4725]: I1014 13:31:21.750026 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Oct 14 13:31:21 crc kubenswrapper[4725]: I1014 13:31:21.754775 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-2h97x"]
Oct 14 13:31:21 crc kubenswrapper[4725]: I1014 13:31:21.899160 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/02dca10f-0051-499a-b8e5-636a18d74f83-dispersionconf\") pod \"swift-ring-rebalance-2h97x\" (UID: \"02dca10f-0051-499a-b8e5-636a18d74f83\") " pod="openstack/swift-ring-rebalance-2h97x"
Oct 14 13:31:21 crc kubenswrapper[4725]: I1014 13:31:21.899199 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6nnc\" (UniqueName: \"kubernetes.io/projected/02dca10f-0051-499a-b8e5-636a18d74f83-kube-api-access-z6nnc\") pod \"swift-ring-rebalance-2h97x\" (UID: \"02dca10f-0051-499a-b8e5-636a18d74f83\") " pod="openstack/swift-ring-rebalance-2h97x"
Oct 14 13:31:21 crc kubenswrapper[4725]: I1014 13:31:21.899230 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/02dca10f-0051-499a-b8e5-636a18d74f83-ring-data-devices\") pod \"swift-ring-rebalance-2h97x\" (UID: \"02dca10f-0051-499a-b8e5-636a18d74f83\") " pod="openstack/swift-ring-rebalance-2h97x"
Oct 14 13:31:21 crc kubenswrapper[4725]: I1014 13:31:21.899307 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02dca10f-0051-499a-b8e5-636a18d74f83-scripts\") pod \"swift-ring-rebalance-2h97x\" (UID: \"02dca10f-0051-499a-b8e5-636a18d74f83\") " pod="openstack/swift-ring-rebalance-2h97x"
Oct 14 13:31:21 crc kubenswrapper[4725]: I1014 13:31:21.899351 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02dca10f-0051-499a-b8e5-636a18d74f83-combined-ca-bundle\") pod \"swift-ring-rebalance-2h97x\" (UID: \"02dca10f-0051-499a-b8e5-636a18d74f83\") " pod="openstack/swift-ring-rebalance-2h97x"
Oct 14 13:31:21 crc kubenswrapper[4725]: I1014 13:31:21.899404 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/02dca10f-0051-499a-b8e5-636a18d74f83-etc-swift\") pod \"swift-ring-rebalance-2h97x\" (UID: \"02dca10f-0051-499a-b8e5-636a18d74f83\") " pod="openstack/swift-ring-rebalance-2h97x"
Oct 14 13:31:21 crc kubenswrapper[4725]: I1014 13:31:21.899459 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b115803-e57f-4651-8a38-9b1aece05cdf-etc-swift\") pod \"swift-storage-0\" (UID: \"8b115803-e57f-4651-8a38-9b1aece05cdf\") " pod="openstack/swift-storage-0"
Oct 14 13:31:21 crc kubenswrapper[4725]: I1014 13:31:21.899495 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/02dca10f-0051-499a-b8e5-636a18d74f83-swiftconf\") pod \"swift-ring-rebalance-2h97x\" (UID: \"02dca10f-0051-499a-b8e5-636a18d74f83\") " pod="openstack/swift-ring-rebalance-2h97x"
Oct 14 13:31:21 crc kubenswrapper[4725]: E1014 13:31:21.899696 4725 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Oct 14 13:31:21 crc kubenswrapper[4725]: E1014 13:31:21.899719 4725 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Oct 14 13:31:21 crc kubenswrapper[4725]: E1014 13:31:21.899762 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8b115803-e57f-4651-8a38-9b1aece05cdf-etc-swift podName:8b115803-e57f-4651-8a38-9b1aece05cdf nodeName:}" failed. No retries permitted until 2025-10-14 13:31:22.899744134 +0000 UTC m=+999.748178943 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8b115803-e57f-4651-8a38-9b1aece05cdf-etc-swift") pod "swift-storage-0" (UID: "8b115803-e57f-4651-8a38-9b1aece05cdf") : configmap "swift-ring-files" not found
Oct 14 13:31:21 crc kubenswrapper[4725]: I1014 13:31:21.988293 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-8pdcq"
Oct 14 13:31:22 crc kubenswrapper[4725]: I1014 13:31:22.000579 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/02dca10f-0051-499a-b8e5-636a18d74f83-dispersionconf\") pod \"swift-ring-rebalance-2h97x\" (UID: \"02dca10f-0051-499a-b8e5-636a18d74f83\") " pod="openstack/swift-ring-rebalance-2h97x"
Oct 14 13:31:22 crc kubenswrapper[4725]: I1014 13:31:22.000632 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6nnc\" (UniqueName: \"kubernetes.io/projected/02dca10f-0051-499a-b8e5-636a18d74f83-kube-api-access-z6nnc\") pod \"swift-ring-rebalance-2h97x\" (UID: \"02dca10f-0051-499a-b8e5-636a18d74f83\") " pod="openstack/swift-ring-rebalance-2h97x"
Oct 14 13:31:22 crc kubenswrapper[4725]: I1014 13:31:22.001291 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/02dca10f-0051-499a-b8e5-636a18d74f83-ring-data-devices\") pod \"swift-ring-rebalance-2h97x\" (UID: \"02dca10f-0051-499a-b8e5-636a18d74f83\") " pod="openstack/swift-ring-rebalance-2h97x"
Oct 14 13:31:22 crc kubenswrapper[4725]: I1014 13:31:22.001374 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02dca10f-0051-499a-b8e5-636a18d74f83-scripts\") pod \"swift-ring-rebalance-2h97x\" (UID: \"02dca10f-0051-499a-b8e5-636a18d74f83\") " pod="openstack/swift-ring-rebalance-2h97x"
Oct 14 13:31:22 crc kubenswrapper[4725]: I1014 13:31:22.001442 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02dca10f-0051-499a-b8e5-636a18d74f83-combined-ca-bundle\") pod \"swift-ring-rebalance-2h97x\" (UID: \"02dca10f-0051-499a-b8e5-636a18d74f83\") " pod="openstack/swift-ring-rebalance-2h97x"
Oct 14 13:31:22 crc kubenswrapper[4725]: I1014 13:31:22.001512 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/02dca10f-0051-499a-b8e5-636a18d74f83-etc-swift\") pod \"swift-ring-rebalance-2h97x\" (UID: \"02dca10f-0051-499a-b8e5-636a18d74f83\") " pod="openstack/swift-ring-rebalance-2h97x"
Oct 14 13:31:22 crc kubenswrapper[4725]: I1014 13:31:22.001572 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/02dca10f-0051-499a-b8e5-636a18d74f83-swiftconf\") pod \"swift-ring-rebalance-2h97x\" (UID: \"02dca10f-0051-499a-b8e5-636a18d74f83\") " pod="openstack/swift-ring-rebalance-2h97x"
Oct 14 13:31:22 crc kubenswrapper[4725]: I1014 13:31:22.002138 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/02dca10f-0051-499a-b8e5-636a18d74f83-ring-data-devices\") pod \"swift-ring-rebalance-2h97x\" (UID: \"02dca10f-0051-499a-b8e5-636a18d74f83\") " pod="openstack/swift-ring-rebalance-2h97x"
Oct 14 13:31:22 crc kubenswrapper[4725]: I1014 13:31:22.002773 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02dca10f-0051-499a-b8e5-636a18d74f83-scripts\") pod \"swift-ring-rebalance-2h97x\" (UID: \"02dca10f-0051-499a-b8e5-636a18d74f83\") " pod="openstack/swift-ring-rebalance-2h97x"
Oct 14 13:31:22 crc kubenswrapper[4725]: I1014 13:31:22.002444 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/02dca10f-0051-499a-b8e5-636a18d74f83-etc-swift\") pod \"swift-ring-rebalance-2h97x\" (UID: \"02dca10f-0051-499a-b8e5-636a18d74f83\") " pod="openstack/swift-ring-rebalance-2h97x"
Oct 14 13:31:22 crc kubenswrapper[4725]: I1014 13:31:22.006984 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/02dca10f-0051-499a-b8e5-636a18d74f83-dispersionconf\") pod \"swift-ring-rebalance-2h97x\" (UID: \"02dca10f-0051-499a-b8e5-636a18d74f83\") " pod="openstack/swift-ring-rebalance-2h97x"
Oct 14 13:31:22 crc kubenswrapper[4725]: I1014 13:31:22.007419 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02dca10f-0051-499a-b8e5-636a18d74f83-combined-ca-bundle\") pod \"swift-ring-rebalance-2h97x\" (UID: \"02dca10f-0051-499a-b8e5-636a18d74f83\") " pod="openstack/swift-ring-rebalance-2h97x"
Oct 14 13:31:22 crc kubenswrapper[4725]: I1014 13:31:22.009512 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/02dca10f-0051-499a-b8e5-636a18d74f83-swiftconf\") pod \"swift-ring-rebalance-2h97x\" (UID: \"02dca10f-0051-499a-b8e5-636a18d74f83\") " pod="openstack/swift-ring-rebalance-2h97x"
Oct 14 13:31:22 crc kubenswrapper[4725]: I1014 13:31:22.020822 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6nnc\" (UniqueName: \"kubernetes.io/projected/02dca10f-0051-499a-b8e5-636a18d74f83-kube-api-access-z6nnc\") pod \"swift-ring-rebalance-2h97x\" (UID: \"02dca10f-0051-499a-b8e5-636a18d74f83\") " pod="openstack/swift-ring-rebalance-2h97x"
Oct 14 13:31:22 crc kubenswrapper[4725]: I1014 13:31:22.069248 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-2h97x"
Oct 14 13:31:22 crc kubenswrapper[4725]: I1014 13:31:22.103150 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3e3f245-5552-4de6-9c38-2f387f6234d6-dns-svc\") pod \"d3e3f245-5552-4de6-9c38-2f387f6234d6\" (UID: \"d3e3f245-5552-4de6-9c38-2f387f6234d6\") "
Oct 14 13:31:22 crc kubenswrapper[4725]: I1014 13:31:22.103243 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4c857\" (UniqueName: \"kubernetes.io/projected/d3e3f245-5552-4de6-9c38-2f387f6234d6-kube-api-access-4c857\") pod \"d3e3f245-5552-4de6-9c38-2f387f6234d6\" (UID: \"d3e3f245-5552-4de6-9c38-2f387f6234d6\") "
Oct 14 13:31:22 crc kubenswrapper[4725]: I1014 13:31:22.103285 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3e3f245-5552-4de6-9c38-2f387f6234d6-ovsdbserver-nb\") pod \"d3e3f245-5552-4de6-9c38-2f387f6234d6\" (UID: \"d3e3f245-5552-4de6-9c38-2f387f6234d6\") "
Oct 14 13:31:22 crc kubenswrapper[4725]: I1014 13:31:22.103379 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3e3f245-5552-4de6-9c38-2f387f6234d6-config\") pod \"d3e3f245-5552-4de6-9c38-2f387f6234d6\" (UID: \"d3e3f245-5552-4de6-9c38-2f387f6234d6\") "
Oct 14 13:31:22 crc kubenswrapper[4725]: I1014 13:31:22.110664 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3e3f245-5552-4de6-9c38-2f387f6234d6-kube-api-access-4c857" (OuterVolumeSpecName: "kube-api-access-4c857") pod "d3e3f245-5552-4de6-9c38-2f387f6234d6" (UID: "d3e3f245-5552-4de6-9c38-2f387f6234d6"). InnerVolumeSpecName "kube-api-access-4c857". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:31:22 crc kubenswrapper[4725]: I1014 13:31:22.150020 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3e3f245-5552-4de6-9c38-2f387f6234d6-config" (OuterVolumeSpecName: "config") pod "d3e3f245-5552-4de6-9c38-2f387f6234d6" (UID: "d3e3f245-5552-4de6-9c38-2f387f6234d6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:31:22 crc kubenswrapper[4725]: I1014 13:31:22.150808 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3e3f245-5552-4de6-9c38-2f387f6234d6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d3e3f245-5552-4de6-9c38-2f387f6234d6" (UID: "d3e3f245-5552-4de6-9c38-2f387f6234d6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:31:22 crc kubenswrapper[4725]: I1014 13:31:22.155568 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3e3f245-5552-4de6-9c38-2f387f6234d6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d3e3f245-5552-4de6-9c38-2f387f6234d6" (UID: "d3e3f245-5552-4de6-9c38-2f387f6234d6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:31:22 crc kubenswrapper[4725]: I1014 13:31:22.206261 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3e3f245-5552-4de6-9c38-2f387f6234d6-config\") on node \"crc\" DevicePath \"\""
Oct 14 13:31:22 crc kubenswrapper[4725]: I1014 13:31:22.206291 4725 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3e3f245-5552-4de6-9c38-2f387f6234d6-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 14 13:31:22 crc kubenswrapper[4725]: I1014 13:31:22.206304 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4c857\" (UniqueName: \"kubernetes.io/projected/d3e3f245-5552-4de6-9c38-2f387f6234d6-kube-api-access-4c857\") on node \"crc\" DevicePath \"\""
Oct 14 13:31:22 crc kubenswrapper[4725]: I1014 13:31:22.206381 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3e3f245-5552-4de6-9c38-2f387f6234d6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Oct 14 13:31:22 crc kubenswrapper[4725]: I1014 13:31:22.318964 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-dx7jg" event={"ID":"5e301bc4-9f27-4a05-84ed-5c194e4244d9","Type":"ContainerStarted","Data":"d64c75cf3106a24f5989dc7584b8e88e2a7a7566d4e03d3878276806e25f76b7"}
Oct 14 13:31:22 crc kubenswrapper[4725]: I1014 13:31:22.321073 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-8pdcq" event={"ID":"d3e3f245-5552-4de6-9c38-2f387f6234d6","Type":"ContainerDied","Data":"9c7122cf0bd9bbcdd8f26856b7758ba79e950e6b9d5007497b9f3cd97e7fc195"}
Oct 14 13:31:22 crc kubenswrapper[4725]: I1014 13:31:22.321104 4725 scope.go:117] "RemoveContainer" containerID="cc8aaf36d87737dcb9c65fb22dcb90404b7863ec644aeb919b45392a9ce1ea40"
Oct 14 13:31:22 crc kubenswrapper[4725]: I1014 13:31:22.321133 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-8pdcq"
Oct 14 13:31:22 crc kubenswrapper[4725]: I1014 13:31:22.323929 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-kqvmq" event={"ID":"4dfca925-34cf-487b-a4ad-5a16a8f24b65","Type":"ContainerStarted","Data":"4f89fdc5517f31dcbd79fff79e47536fe39014c4c380886c42baf9a737b3885c"}
Oct 14 13:31:22 crc kubenswrapper[4725]: I1014 13:31:22.324743 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-kqvmq"
Oct 14 13:31:22 crc kubenswrapper[4725]: I1014 13:31:22.351325 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-dx7jg" podStartSLOduration=3.788471127 podStartE2EDuration="4.351308256s" podCreationTimestamp="2025-10-14 13:31:18 +0000 UTC" firstStartedPulling="2025-10-14 13:31:18.971647995 +0000 UTC m=+995.820082804" lastFinishedPulling="2025-10-14 13:31:19.534485134 +0000 UTC m=+996.382919933" observedRunningTime="2025-10-14 13:31:22.349136268 +0000 UTC m=+999.197571127" watchObservedRunningTime="2025-10-14 13:31:22.351308256 +0000 UTC m=+999.199743075"
Oct 14 13:31:22 crc kubenswrapper[4725]: I1014 13:31:22.366101 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0431437b-b27f-4e47-8a60-6aecd1148270","Type":"ContainerStarted","Data":"dd7eb64312c3bab82f7cea182c13b5687b7d15bc3721e82b1ad613f1071c880a"}
Oct 14 13:31:22 crc kubenswrapper[4725]: I1014 13:31:22.366154 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0431437b-b27f-4e47-8a60-6aecd1148270","Type":"ContainerStarted","Data":"28e54d11b1e41dc602e2d8d3532a86bfb5de69a08ca72c83d540c9b5d247c81d"}
Oct 14 13:31:22 crc kubenswrapper[4725]: I1014 13:31:22.367007 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Oct 14 13:31:22 crc kubenswrapper[4725]: I1014 13:31:22.387046 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-kqvmq" podStartSLOduration=2.387022088 podStartE2EDuration="2.387022088s" podCreationTimestamp="2025-10-14 13:31:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:31:22.385784504 +0000 UTC m=+999.234219323" watchObservedRunningTime="2025-10-14 13:31:22.387022088 +0000 UTC m=+999.235456907"
Oct 14 13:31:22 crc kubenswrapper[4725]: I1014 13:31:22.472522 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-8pdcq"]
Oct 14 13:31:22 crc kubenswrapper[4725]: I1014 13:31:22.497206 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-8pdcq"]
Oct 14 13:31:22 crc kubenswrapper[4725]: I1014 13:31:22.536287 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.294265627 podStartE2EDuration="3.536272217s" podCreationTimestamp="2025-10-14 13:31:19 +0000 UTC" firstStartedPulling="2025-10-14 13:31:20.629721362 +0000 UTC m=+997.478156171" lastFinishedPulling="2025-10-14 13:31:21.871727952 +0000 UTC m=+998.720162761" observedRunningTime="2025-10-14 13:31:22.50582925 +0000 UTC m=+999.354264059" watchObservedRunningTime="2025-10-14 13:31:22.536272217 +0000 UTC m=+999.384707026"
Oct 14 13:31:22 crc kubenswrapper[4725]: W1014 13:31:22.540184 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02dca10f_0051_499a_b8e5_636a18d74f83.slice/crio-b5ee5405d95e5502a1f7da630a18bcf5d349c2a329120325a0e13704aef095e4 WatchSource:0}: Error finding container b5ee5405d95e5502a1f7da630a18bcf5d349c2a329120325a0e13704aef095e4: Status 404 returned error can't find the container with id b5ee5405d95e5502a1f7da630a18bcf5d349c2a329120325a0e13704aef095e4
Oct 14 13:31:22 crc kubenswrapper[4725]: I1014 13:31:22.541145 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-2h97x"]
Oct 14 13:31:22 crc kubenswrapper[4725]: I1014 13:31:22.926091 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b115803-e57f-4651-8a38-9b1aece05cdf-etc-swift\") pod \"swift-storage-0\" (UID: \"8b115803-e57f-4651-8a38-9b1aece05cdf\") " pod="openstack/swift-storage-0"
Oct 14 13:31:22 crc kubenswrapper[4725]: E1014 13:31:22.926274 4725 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Oct 14 13:31:22 crc kubenswrapper[4725]: E1014 13:31:22.926300 4725 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Oct 14 13:31:22 crc kubenswrapper[4725]: E1014 13:31:22.926358 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8b115803-e57f-4651-8a38-9b1aece05cdf-etc-swift podName:8b115803-e57f-4651-8a38-9b1aece05cdf nodeName:}" failed. No retries permitted until 2025-10-14 13:31:24.926338587 +0000 UTC m=+1001.774773396 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8b115803-e57f-4651-8a38-9b1aece05cdf-etc-swift") pod "swift-storage-0" (UID: "8b115803-e57f-4651-8a38-9b1aece05cdf") : configmap "swift-ring-files" not found
Oct 14 13:31:23 crc kubenswrapper[4725]: I1014 13:31:23.372863 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-2h97x" event={"ID":"02dca10f-0051-499a-b8e5-636a18d74f83","Type":"ContainerStarted","Data":"b5ee5405d95e5502a1f7da630a18bcf5d349c2a329120325a0e13704aef095e4"}
Oct 14 13:31:23 crc kubenswrapper[4725]: I1014 13:31:23.456200 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-dx7jg"
Oct 14 13:31:23 crc kubenswrapper[4725]: I1014 13:31:23.944973 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3e3f245-5552-4de6-9c38-2f387f6234d6" path="/var/lib/kubelet/pods/d3e3f245-5552-4de6-9c38-2f387f6234d6/volumes"
Oct 14 13:31:24 crc kubenswrapper[4725]: I1014 13:31:24.964855 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b115803-e57f-4651-8a38-9b1aece05cdf-etc-swift\") pod \"swift-storage-0\" (UID: \"8b115803-e57f-4651-8a38-9b1aece05cdf\") " pod="openstack/swift-storage-0"
Oct 14 13:31:24 crc kubenswrapper[4725]: E1014 13:31:24.965051 4725 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Oct 14 13:31:24 crc kubenswrapper[4725]: E1014 13:31:24.965174 4725 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Oct 14 13:31:24 crc kubenswrapper[4725]: E1014 13:31:24.965237 4725 nestedpendingoperations.go:348] Operation for
"{volumeName:kubernetes.io/projected/8b115803-e57f-4651-8a38-9b1aece05cdf-etc-swift podName:8b115803-e57f-4651-8a38-9b1aece05cdf nodeName:}" failed. No retries permitted until 2025-10-14 13:31:28.96521567 +0000 UTC m=+1005.813650489 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8b115803-e57f-4651-8a38-9b1aece05cdf-etc-swift") pod "swift-storage-0" (UID: "8b115803-e57f-4651-8a38-9b1aece05cdf") : configmap "swift-ring-files" not found Oct 14 13:31:26 crc kubenswrapper[4725]: I1014 13:31:26.429860 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-2h97x" event={"ID":"02dca10f-0051-499a-b8e5-636a18d74f83","Type":"ContainerStarted","Data":"84546163afcafc2804326a47babe87110e78096a7e0d567fe49cecbfd14988ea"} Oct 14 13:31:26 crc kubenswrapper[4725]: I1014 13:31:26.457756 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-2h97x" podStartSLOduration=2.156794965 podStartE2EDuration="5.457728556s" podCreationTimestamp="2025-10-14 13:31:21 +0000 UTC" firstStartedPulling="2025-10-14 13:31:22.542293551 +0000 UTC m=+999.390728360" lastFinishedPulling="2025-10-14 13:31:25.843227132 +0000 UTC m=+1002.691661951" observedRunningTime="2025-10-14 13:31:26.453931243 +0000 UTC m=+1003.302366062" watchObservedRunningTime="2025-10-14 13:31:26.457728556 +0000 UTC m=+1003.306163405" Oct 14 13:31:26 crc kubenswrapper[4725]: I1014 13:31:26.531045 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Oct 14 13:31:26 crc kubenswrapper[4725]: I1014 13:31:26.531134 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Oct 14 13:31:26 crc kubenswrapper[4725]: I1014 13:31:26.597633 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Oct 14 13:31:27 crc kubenswrapper[4725]: I1014 13:31:27.496857 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Oct 14 13:31:27 crc kubenswrapper[4725]: I1014 13:31:27.907510 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Oct 14 13:31:27 crc kubenswrapper[4725]: I1014 13:31:27.907609 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Oct 14 13:31:27 crc kubenswrapper[4725]: I1014 13:31:27.951029 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-bs9b4"] Oct 14 13:31:27 crc kubenswrapper[4725]: E1014 13:31:27.951422 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3e3f245-5552-4de6-9c38-2f387f6234d6" containerName="init" Oct 14 13:31:27 crc kubenswrapper[4725]: I1014 13:31:27.951443 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3e3f245-5552-4de6-9c38-2f387f6234d6" containerName="init" Oct 14 13:31:27 crc kubenswrapper[4725]: I1014 13:31:27.951671 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3e3f245-5552-4de6-9c38-2f387f6234d6" containerName="init" Oct 14 13:31:27 crc kubenswrapper[4725]: I1014 13:31:27.952714 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-bs9b4" Oct 14 13:31:27 crc kubenswrapper[4725]: I1014 13:31:27.965350 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-bs9b4"] Oct 14 13:31:27 crc kubenswrapper[4725]: I1014 13:31:27.988006 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Oct 14 13:31:28 crc kubenswrapper[4725]: I1014 13:31:28.112140 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzz4g\" (UniqueName: \"kubernetes.io/projected/4a8d6508-9075-409d-b8d8-ed03113819a1-kube-api-access-dzz4g\") pod \"keystone-db-create-bs9b4\" (UID: \"4a8d6508-9075-409d-b8d8-ed03113819a1\") " pod="openstack/keystone-db-create-bs9b4" Oct 14 13:31:28 crc kubenswrapper[4725]: I1014 13:31:28.171365 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-q7pwj"] Oct 14 13:31:28 crc kubenswrapper[4725]: I1014 13:31:28.172716 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-q7pwj" Oct 14 13:31:28 crc kubenswrapper[4725]: I1014 13:31:28.179582 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-q7pwj"] Oct 14 13:31:28 crc kubenswrapper[4725]: I1014 13:31:28.214167 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzz4g\" (UniqueName: \"kubernetes.io/projected/4a8d6508-9075-409d-b8d8-ed03113819a1-kube-api-access-dzz4g\") pod \"keystone-db-create-bs9b4\" (UID: \"4a8d6508-9075-409d-b8d8-ed03113819a1\") " pod="openstack/keystone-db-create-bs9b4" Oct 14 13:31:28 crc kubenswrapper[4725]: I1014 13:31:28.235269 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzz4g\" (UniqueName: \"kubernetes.io/projected/4a8d6508-9075-409d-b8d8-ed03113819a1-kube-api-access-dzz4g\") pod \"keystone-db-create-bs9b4\" (UID: \"4a8d6508-9075-409d-b8d8-ed03113819a1\") " pod="openstack/keystone-db-create-bs9b4" Oct 14 13:31:28 crc kubenswrapper[4725]: I1014 13:31:28.295426 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-bs9b4" Oct 14 13:31:28 crc kubenswrapper[4725]: I1014 13:31:28.316079 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckkjg\" (UniqueName: \"kubernetes.io/projected/1fca1233-9113-4f55-8632-c75d41eabd80-kube-api-access-ckkjg\") pod \"placement-db-create-q7pwj\" (UID: \"1fca1233-9113-4f55-8632-c75d41eabd80\") " pod="openstack/placement-db-create-q7pwj" Oct 14 13:31:28 crc kubenswrapper[4725]: I1014 13:31:28.441101 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckkjg\" (UniqueName: \"kubernetes.io/projected/1fca1233-9113-4f55-8632-c75d41eabd80-kube-api-access-ckkjg\") pod \"placement-db-create-q7pwj\" (UID: \"1fca1233-9113-4f55-8632-c75d41eabd80\") " pod="openstack/placement-db-create-q7pwj" Oct 14 13:31:28 crc kubenswrapper[4725]: I1014 13:31:28.458180 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-dx7jg" Oct 14 13:31:28 crc kubenswrapper[4725]: I1014 13:31:28.474761 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-7598j"] Oct 14 13:31:28 crc kubenswrapper[4725]: I1014 13:31:28.476065 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckkjg\" (UniqueName: \"kubernetes.io/projected/1fca1233-9113-4f55-8632-c75d41eabd80-kube-api-access-ckkjg\") pod \"placement-db-create-q7pwj\" (UID: \"1fca1233-9113-4f55-8632-c75d41eabd80\") " pod="openstack/placement-db-create-q7pwj" Oct 14 13:31:28 crc kubenswrapper[4725]: I1014 13:31:28.476382 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-7598j" Oct 14 13:31:28 crc kubenswrapper[4725]: I1014 13:31:28.487669 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-7598j"] Oct 14 13:31:28 crc kubenswrapper[4725]: I1014 13:31:28.489365 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-q7pwj" Oct 14 13:31:28 crc kubenswrapper[4725]: I1014 13:31:28.558924 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Oct 14 13:31:28 crc kubenswrapper[4725]: I1014 13:31:28.649309 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqzwb\" (UniqueName: \"kubernetes.io/projected/2c7bd34b-5329-4592-a439-7d5eaf070bfa-kube-api-access-pqzwb\") pod \"glance-db-create-7598j\" (UID: \"2c7bd34b-5329-4592-a439-7d5eaf070bfa\") " pod="openstack/glance-db-create-7598j" Oct 14 13:31:28 crc kubenswrapper[4725]: I1014 13:31:28.751385 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqzwb\" (UniqueName: \"kubernetes.io/projected/2c7bd34b-5329-4592-a439-7d5eaf070bfa-kube-api-access-pqzwb\") pod \"glance-db-create-7598j\" (UID: \"2c7bd34b-5329-4592-a439-7d5eaf070bfa\") " pod="openstack/glance-db-create-7598j" Oct 14 13:31:28 crc kubenswrapper[4725]: I1014 13:31:28.767627 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqzwb\" (UniqueName: \"kubernetes.io/projected/2c7bd34b-5329-4592-a439-7d5eaf070bfa-kube-api-access-pqzwb\") pod \"glance-db-create-7598j\" (UID: \"2c7bd34b-5329-4592-a439-7d5eaf070bfa\") " pod="openstack/glance-db-create-7598j" Oct 14 13:31:28 crc kubenswrapper[4725]: I1014 13:31:28.839440 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-bs9b4"] Oct 14 13:31:28 crc kubenswrapper[4725]: I1014 13:31:28.878381 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-7598j" Oct 14 13:31:29 crc kubenswrapper[4725]: I1014 13:31:28.999588 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-q7pwj"] Oct 14 13:31:29 crc kubenswrapper[4725]: W1014 13:31:29.007812 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fca1233_9113_4f55_8632_c75d41eabd80.slice/crio-96989d912c241ece1e1ce59a83351febde960a242b0b830e4e77f67598fa3229 WatchSource:0}: Error finding container 96989d912c241ece1e1ce59a83351febde960a242b0b830e4e77f67598fa3229: Status 404 returned error can't find the container with id 96989d912c241ece1e1ce59a83351febde960a242b0b830e4e77f67598fa3229 Oct 14 13:31:29 crc kubenswrapper[4725]: I1014 13:31:29.058586 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b115803-e57f-4651-8a38-9b1aece05cdf-etc-swift\") pod \"swift-storage-0\" (UID: \"8b115803-e57f-4651-8a38-9b1aece05cdf\") " pod="openstack/swift-storage-0" Oct 14 13:31:29 crc kubenswrapper[4725]: E1014 13:31:29.058823 4725 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 14 13:31:29 crc kubenswrapper[4725]: E1014 13:31:29.058860 4725 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 14 13:31:29 crc kubenswrapper[4725]: E1014 13:31:29.058922 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8b115803-e57f-4651-8a38-9b1aece05cdf-etc-swift podName:8b115803-e57f-4651-8a38-9b1aece05cdf nodeName:}" failed. 
Oct 14 13:31:29 crc kubenswrapper[4725]: I1014 13:31:29.313248 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-7598j"] Oct 14 13:31:29 crc kubenswrapper[4725]: W1014 13:31:29.324743 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c7bd34b_5329_4592_a439_7d5eaf070bfa.slice/crio-fca9fa0639888b248715837bd57dae0e2d2aee211f6d51c90631add7c5cc0804 WatchSource:0}: Error finding container fca9fa0639888b248715837bd57dae0e2d2aee211f6d51c90631add7c5cc0804: Status 404 returned error can't find the container with id fca9fa0639888b248715837bd57dae0e2d2aee211f6d51c90631add7c5cc0804 Oct 14 13:31:29 crc kubenswrapper[4725]: I1014 13:31:29.461120 4725 generic.go:334] "Generic (PLEG): container finished" podID="1fca1233-9113-4f55-8632-c75d41eabd80" containerID="78b8efc9be8f28637bd3f5011b9ff51a11508f7a312234900bcf5a21ba6071a1" exitCode=0 Oct 14 13:31:29 crc kubenswrapper[4725]: I1014 13:31:29.461213 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-q7pwj" event={"ID":"1fca1233-9113-4f55-8632-c75d41eabd80","Type":"ContainerDied","Data":"78b8efc9be8f28637bd3f5011b9ff51a11508f7a312234900bcf5a21ba6071a1"} Oct 14 13:31:29 crc kubenswrapper[4725]: I1014 13:31:29.461248 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-q7pwj" event={"ID":"1fca1233-9113-4f55-8632-c75d41eabd80","Type":"ContainerStarted","Data":"96989d912c241ece1e1ce59a83351febde960a242b0b830e4e77f67598fa3229"} Oct 14 13:31:29 crc kubenswrapper[4725]: I1014 13:31:29.464076 4725 generic.go:334] "Generic (PLEG): container finished" podID="4a8d6508-9075-409d-b8d8-ed03113819a1" containerID="1b458b268dfcbfa776d3cc7d4ab7d5ddf51a8261c46fac13cdb54a3e77abfb0a" exitCode=0 Oct 14 13:31:29 crc kubenswrapper[4725]: I1014 13:31:29.464355 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bs9b4" event={"ID":"4a8d6508-9075-409d-b8d8-ed03113819a1","Type":"ContainerDied","Data":"1b458b268dfcbfa776d3cc7d4ab7d5ddf51a8261c46fac13cdb54a3e77abfb0a"} Oct 14 13:31:29 crc kubenswrapper[4725]: I1014 13:31:29.464572 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bs9b4" event={"ID":"4a8d6508-9075-409d-b8d8-ed03113819a1","Type":"ContainerStarted","Data":"3643a01c314fe20984c1848ad3a6fb4f6d8b192547c1e5034cdc647f379dd773"} Oct 14 13:31:29 crc kubenswrapper[4725]: I1014 13:31:29.466773 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-7598j" event={"ID":"2c7bd34b-5329-4592-a439-7d5eaf070bfa","Type":"ContainerStarted","Data":"fca9fa0639888b248715837bd57dae0e2d2aee211f6d51c90631add7c5cc0804"} Oct 14 13:31:30 crc kubenswrapper[4725]: I1014 13:31:30.475490 4725 generic.go:334] "Generic (PLEG): container finished" podID="2c7bd34b-5329-4592-a439-7d5eaf070bfa" containerID="46002d2a081ca4d5eff82609f27fe46ec88e562c31779ebb6fbc39e248ea1225" exitCode=0 Oct 14 13:31:30 crc kubenswrapper[4725]: I1014 13:31:30.475539 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-7598j" 
event={"ID":"2c7bd34b-5329-4592-a439-7d5eaf070bfa","Type":"ContainerDied","Data":"46002d2a081ca4d5eff82609f27fe46ec88e562c31779ebb6fbc39e248ea1225"} Oct 14 13:31:30 crc kubenswrapper[4725]: I1014 13:31:30.537061 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-kqvmq" Oct 14 13:31:30 crc kubenswrapper[4725]: I1014 13:31:30.630874 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-dx7jg"] Oct 14 13:31:30 crc kubenswrapper[4725]: I1014 13:31:30.631190 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-dx7jg" podUID="5e301bc4-9f27-4a05-84ed-5c194e4244d9" containerName="dnsmasq-dns" containerID="cri-o://d64c75cf3106a24f5989dc7584b8e88e2a7a7566d4e03d3878276806e25f76b7" gracePeriod=10 Oct 14 13:31:30 crc kubenswrapper[4725]: I1014 13:31:30.804281 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-q7pwj" Oct 14 13:31:30 crc kubenswrapper[4725]: I1014 13:31:30.938284 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-bs9b4" Oct 14 13:31:30 crc kubenswrapper[4725]: I1014 13:31:30.995208 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckkjg\" (UniqueName: \"kubernetes.io/projected/1fca1233-9113-4f55-8632-c75d41eabd80-kube-api-access-ckkjg\") pod \"1fca1233-9113-4f55-8632-c75d41eabd80\" (UID: \"1fca1233-9113-4f55-8632-c75d41eabd80\") " Oct 14 13:31:31 crc kubenswrapper[4725]: I1014 13:31:31.001351 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fca1233-9113-4f55-8632-c75d41eabd80-kube-api-access-ckkjg" (OuterVolumeSpecName: "kube-api-access-ckkjg") pod "1fca1233-9113-4f55-8632-c75d41eabd80" (UID: "1fca1233-9113-4f55-8632-c75d41eabd80"). InnerVolumeSpecName "kube-api-access-ckkjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:31:31 crc kubenswrapper[4725]: I1014 13:31:31.096616 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzz4g\" (UniqueName: \"kubernetes.io/projected/4a8d6508-9075-409d-b8d8-ed03113819a1-kube-api-access-dzz4g\") pod \"4a8d6508-9075-409d-b8d8-ed03113819a1\" (UID: \"4a8d6508-9075-409d-b8d8-ed03113819a1\") " Oct 14 13:31:31 crc kubenswrapper[4725]: I1014 13:31:31.097138 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckkjg\" (UniqueName: \"kubernetes.io/projected/1fca1233-9113-4f55-8632-c75d41eabd80-kube-api-access-ckkjg\") on node \"crc\" DevicePath \"\"" Oct 14 13:31:31 crc kubenswrapper[4725]: I1014 13:31:31.112701 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a8d6508-9075-409d-b8d8-ed03113819a1-kube-api-access-dzz4g" (OuterVolumeSpecName: "kube-api-access-dzz4g") pod "4a8d6508-9075-409d-b8d8-ed03113819a1" (UID: "4a8d6508-9075-409d-b8d8-ed03113819a1"). InnerVolumeSpecName "kube-api-access-dzz4g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:31:31 crc kubenswrapper[4725]: I1014 13:31:31.198074 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzz4g\" (UniqueName: \"kubernetes.io/projected/4a8d6508-9075-409d-b8d8-ed03113819a1-kube-api-access-dzz4g\") on node \"crc\" DevicePath \"\"" Oct 14 13:31:31 crc kubenswrapper[4725]: I1014 13:31:31.485716 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bs9b4" event={"ID":"4a8d6508-9075-409d-b8d8-ed03113819a1","Type":"ContainerDied","Data":"3643a01c314fe20984c1848ad3a6fb4f6d8b192547c1e5034cdc647f379dd773"} Oct 14 13:31:31 crc kubenswrapper[4725]: I1014 13:31:31.486003 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3643a01c314fe20984c1848ad3a6fb4f6d8b192547c1e5034cdc647f379dd773" Oct 14 13:31:31 crc kubenswrapper[4725]: I1014 13:31:31.486058 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-bs9b4" Oct 14 13:31:31 crc kubenswrapper[4725]: I1014 13:31:31.492856 4725 generic.go:334] "Generic (PLEG): container finished" podID="5e301bc4-9f27-4a05-84ed-5c194e4244d9" containerID="d64c75cf3106a24f5989dc7584b8e88e2a7a7566d4e03d3878276806e25f76b7" exitCode=0 Oct 14 13:31:31 crc kubenswrapper[4725]: I1014 13:31:31.492891 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-dx7jg" event={"ID":"5e301bc4-9f27-4a05-84ed-5c194e4244d9","Type":"ContainerDied","Data":"d64c75cf3106a24f5989dc7584b8e88e2a7a7566d4e03d3878276806e25f76b7"} Oct 14 13:31:31 crc kubenswrapper[4725]: I1014 13:31:31.494470 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-q7pwj" event={"ID":"1fca1233-9113-4f55-8632-c75d41eabd80","Type":"ContainerDied","Data":"96989d912c241ece1e1ce59a83351febde960a242b0b830e4e77f67598fa3229"} Oct 14 13:31:31 crc kubenswrapper[4725]: I1014 13:31:31.494485 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-q7pwj" Oct 14 13:31:31 crc kubenswrapper[4725]: I1014 13:31:31.494504 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96989d912c241ece1e1ce59a83351febde960a242b0b830e4e77f67598fa3229" Oct 14 13:31:31 crc kubenswrapper[4725]: I1014 13:31:31.810602 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-7598j" Oct 14 13:31:31 crc kubenswrapper[4725]: I1014 13:31:31.910157 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqzwb\" (UniqueName: \"kubernetes.io/projected/2c7bd34b-5329-4592-a439-7d5eaf070bfa-kube-api-access-pqzwb\") pod \"2c7bd34b-5329-4592-a439-7d5eaf070bfa\" (UID: \"2c7bd34b-5329-4592-a439-7d5eaf070bfa\") " Oct 14 13:31:31 crc kubenswrapper[4725]: I1014 13:31:31.915228 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c7bd34b-5329-4592-a439-7d5eaf070bfa-kube-api-access-pqzwb" (OuterVolumeSpecName: "kube-api-access-pqzwb") pod "2c7bd34b-5329-4592-a439-7d5eaf070bfa" (UID: "2c7bd34b-5329-4592-a439-7d5eaf070bfa"). InnerVolumeSpecName "kube-api-access-pqzwb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:31:32 crc kubenswrapper[4725]: I1014 13:31:32.012378 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqzwb\" (UniqueName: \"kubernetes.io/projected/2c7bd34b-5329-4592-a439-7d5eaf070bfa-kube-api-access-pqzwb\") on node \"crc\" DevicePath \"\"" Oct 14 13:31:32 crc kubenswrapper[4725]: I1014 13:31:32.388384 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-dx7jg" Oct 14 13:31:32 crc kubenswrapper[4725]: I1014 13:31:32.504886 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-7598j" Oct 14 13:31:32 crc kubenswrapper[4725]: I1014 13:31:32.504891 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-7598j" event={"ID":"2c7bd34b-5329-4592-a439-7d5eaf070bfa","Type":"ContainerDied","Data":"fca9fa0639888b248715837bd57dae0e2d2aee211f6d51c90631add7c5cc0804"} Oct 14 13:31:32 crc kubenswrapper[4725]: I1014 13:31:32.504939 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fca9fa0639888b248715837bd57dae0e2d2aee211f6d51c90631add7c5cc0804" Oct 14 13:31:32 crc kubenswrapper[4725]: I1014 13:31:32.508911 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-dx7jg" event={"ID":"5e301bc4-9f27-4a05-84ed-5c194e4244d9","Type":"ContainerDied","Data":"048ae68be5ceee89983f408d812930dc04a297831b9a26e64c596423e8bbda63"} Oct 14 13:31:32 crc kubenswrapper[4725]: I1014 13:31:32.508966 4725 scope.go:117] "RemoveContainer" containerID="d64c75cf3106a24f5989dc7584b8e88e2a7a7566d4e03d3878276806e25f76b7" Oct 14 13:31:32 crc kubenswrapper[4725]: I1014 13:31:32.508996 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-dx7jg" Oct 14 13:31:32 crc kubenswrapper[4725]: I1014 13:31:32.521139 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jzbn\" (UniqueName: \"kubernetes.io/projected/5e301bc4-9f27-4a05-84ed-5c194e4244d9-kube-api-access-6jzbn\") pod \"5e301bc4-9f27-4a05-84ed-5c194e4244d9\" (UID: \"5e301bc4-9f27-4a05-84ed-5c194e4244d9\") " Oct 14 13:31:32 crc kubenswrapper[4725]: I1014 13:31:32.521294 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e301bc4-9f27-4a05-84ed-5c194e4244d9-ovsdbserver-nb\") pod \"5e301bc4-9f27-4a05-84ed-5c194e4244d9\" (UID: \"5e301bc4-9f27-4a05-84ed-5c194e4244d9\") " Oct 14 13:31:32 crc kubenswrapper[4725]: I1014 13:31:32.521338 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e301bc4-9f27-4a05-84ed-5c194e4244d9-dns-svc\") pod \"5e301bc4-9f27-4a05-84ed-5c194e4244d9\" (UID: \"5e301bc4-9f27-4a05-84ed-5c194e4244d9\") " Oct 14 13:31:32 crc kubenswrapper[4725]: I1014 13:31:32.521382 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e301bc4-9f27-4a05-84ed-5c194e4244d9-ovsdbserver-sb\") pod \"5e301bc4-9f27-4a05-84ed-5c194e4244d9\" (UID: \"5e301bc4-9f27-4a05-84ed-5c194e4244d9\") " Oct 14 13:31:32 crc kubenswrapper[4725]: I1014 13:31:32.521425 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e301bc4-9f27-4a05-84ed-5c194e4244d9-config\") pod \"5e301bc4-9f27-4a05-84ed-5c194e4244d9\" (UID: \"5e301bc4-9f27-4a05-84ed-5c194e4244d9\") " Oct 14 13:31:32 crc kubenswrapper[4725]: I1014 13:31:32.529228 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e301bc4-9f27-4a05-84ed-5c194e4244d9-kube-api-access-6jzbn" (OuterVolumeSpecName: "kube-api-access-6jzbn") pod "5e301bc4-9f27-4a05-84ed-5c194e4244d9" (UID: "5e301bc4-9f27-4a05-84ed-5c194e4244d9"). InnerVolumeSpecName "kube-api-access-6jzbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:31:32 crc kubenswrapper[4725]: I1014 13:31:32.533860 4725 scope.go:117] "RemoveContainer" containerID="de8aebdbdcb14d602266c7e3f4aada7ce3f6ed4ccbb5304610a7f9b3cf3f75ba" Oct 14 13:31:32 crc kubenswrapper[4725]: I1014 13:31:32.570668 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e301bc4-9f27-4a05-84ed-5c194e4244d9-config" (OuterVolumeSpecName: "config") pod "5e301bc4-9f27-4a05-84ed-5c194e4244d9" (UID: "5e301bc4-9f27-4a05-84ed-5c194e4244d9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:31:32 crc kubenswrapper[4725]: I1014 13:31:32.587831 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e301bc4-9f27-4a05-84ed-5c194e4244d9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5e301bc4-9f27-4a05-84ed-5c194e4244d9" (UID: "5e301bc4-9f27-4a05-84ed-5c194e4244d9"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:31:32 crc kubenswrapper[4725]: I1014 13:31:32.588592 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e301bc4-9f27-4a05-84ed-5c194e4244d9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5e301bc4-9f27-4a05-84ed-5c194e4244d9" (UID: "5e301bc4-9f27-4a05-84ed-5c194e4244d9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:31:32 crc kubenswrapper[4725]: I1014 13:31:32.592107 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e301bc4-9f27-4a05-84ed-5c194e4244d9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5e301bc4-9f27-4a05-84ed-5c194e4244d9" (UID: "5e301bc4-9f27-4a05-84ed-5c194e4244d9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:31:32 crc kubenswrapper[4725]: I1014 13:31:32.624229 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jzbn\" (UniqueName: \"kubernetes.io/projected/5e301bc4-9f27-4a05-84ed-5c194e4244d9-kube-api-access-6jzbn\") on node \"crc\" DevicePath \"\"" Oct 14 13:31:32 crc kubenswrapper[4725]: I1014 13:31:32.624285 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e301bc4-9f27-4a05-84ed-5c194e4244d9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 13:31:32 crc kubenswrapper[4725]: I1014 13:31:32.624305 4725 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e301bc4-9f27-4a05-84ed-5c194e4244d9-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 13:31:32 crc kubenswrapper[4725]: I1014 13:31:32.624325 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e301bc4-9f27-4a05-84ed-5c194e4244d9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 13:31:32 crc kubenswrapper[4725]: I1014 13:31:32.624344 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e301bc4-9f27-4a05-84ed-5c194e4244d9-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:31:32 crc kubenswrapper[4725]: I1014 13:31:32.858128 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-dx7jg"] Oct 14 13:31:32 crc kubenswrapper[4725]: I1014 13:31:32.863173 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-dx7jg"] Oct 14 13:31:33 crc kubenswrapper[4725]: I1014 13:31:33.519663 4725 generic.go:334] "Generic (PLEG): container finished" podID="02dca10f-0051-499a-b8e5-636a18d74f83" containerID="84546163afcafc2804326a47babe87110e78096a7e0d567fe49cecbfd14988ea" exitCode=0 Oct 14 13:31:33 crc kubenswrapper[4725]: I1014 13:31:33.519815 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-2h97x" event={"ID":"02dca10f-0051-499a-b8e5-636a18d74f83","Type":"ContainerDied","Data":"84546163afcafc2804326a47babe87110e78096a7e0d567fe49cecbfd14988ea"} Oct 14 13:31:33 crc kubenswrapper[4725]: I1014 13:31:33.934769 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e301bc4-9f27-4a05-84ed-5c194e4244d9" path="/var/lib/kubelet/pods/5e301bc4-9f27-4a05-84ed-5c194e4244d9/volumes" Oct 14 13:31:34 crc kubenswrapper[4725]: I1014 13:31:34.887868 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-2h97x" Oct 14 13:31:35 crc kubenswrapper[4725]: I1014 13:31:35.038556 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Oct 14 13:31:35 crc kubenswrapper[4725]: I1014 13:31:35.072598 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/02dca10f-0051-499a-b8e5-636a18d74f83-swiftconf\") pod \"02dca10f-0051-499a-b8e5-636a18d74f83\" (UID: \"02dca10f-0051-499a-b8e5-636a18d74f83\") " Oct 14 13:31:35 crc kubenswrapper[4725]: I1014 13:31:35.072676 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6nnc\" (UniqueName: \"kubernetes.io/projected/02dca10f-0051-499a-b8e5-636a18d74f83-kube-api-access-z6nnc\") pod \"02dca10f-0051-499a-b8e5-636a18d74f83\" (UID: \"02dca10f-0051-499a-b8e5-636a18d74f83\") " Oct 14 13:31:35 crc kubenswrapper[4725]: I1014 13:31:35.072734 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02dca10f-0051-499a-b8e5-636a18d74f83-combined-ca-bundle\") pod \"02dca10f-0051-499a-b8e5-636a18d74f83\" (UID: \"02dca10f-0051-499a-b8e5-636a18d74f83\") " Oct 14 13:31:35 crc kubenswrapper[4725]: I1014 13:31:35.072791 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02dca10f-0051-499a-b8e5-636a18d74f83-scripts\") pod \"02dca10f-0051-499a-b8e5-636a18d74f83\" (UID: \"02dca10f-0051-499a-b8e5-636a18d74f83\") " Oct 14 13:31:35 crc kubenswrapper[4725]: I1014 13:31:35.072869 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/02dca10f-0051-499a-b8e5-636a18d74f83-etc-swift\") pod \"02dca10f-0051-499a-b8e5-636a18d74f83\" (UID: \"02dca10f-0051-499a-b8e5-636a18d74f83\") " Oct 14 13:31:35 crc kubenswrapper[4725]: I1014 13:31:35.072917 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/02dca10f-0051-499a-b8e5-636a18d74f83-ring-data-devices\") pod \"02dca10f-0051-499a-b8e5-636a18d74f83\" (UID: \"02dca10f-0051-499a-b8e5-636a18d74f83\") " Oct 14 13:31:35 crc kubenswrapper[4725]: I1014 13:31:35.072994 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/02dca10f-0051-499a-b8e5-636a18d74f83-dispersionconf\") pod \"02dca10f-0051-499a-b8e5-636a18d74f83\" (UID: \"02dca10f-0051-499a-b8e5-636a18d74f83\") " Oct 14 13:31:35 crc kubenswrapper[4725]: I1014 13:31:35.074953 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02dca10f-0051-499a-b8e5-636a18d74f83-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "02dca10f-0051-499a-b8e5-636a18d74f83" (UID: "02dca10f-0051-499a-b8e5-636a18d74f83"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:31:35 crc kubenswrapper[4725]: I1014 13:31:35.077106 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02dca10f-0051-499a-b8e5-636a18d74f83-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "02dca10f-0051-499a-b8e5-636a18d74f83" (UID: "02dca10f-0051-499a-b8e5-636a18d74f83"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:31:35 crc kubenswrapper[4725]: I1014 13:31:35.079770 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02dca10f-0051-499a-b8e5-636a18d74f83-kube-api-access-z6nnc" (OuterVolumeSpecName: "kube-api-access-z6nnc") pod "02dca10f-0051-499a-b8e5-636a18d74f83" (UID: "02dca10f-0051-499a-b8e5-636a18d74f83"). InnerVolumeSpecName "kube-api-access-z6nnc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:31:35 crc kubenswrapper[4725]: I1014 13:31:35.096486 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02dca10f-0051-499a-b8e5-636a18d74f83-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "02dca10f-0051-499a-b8e5-636a18d74f83" (UID: "02dca10f-0051-499a-b8e5-636a18d74f83"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:31:35 crc kubenswrapper[4725]: I1014 13:31:35.109346 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02dca10f-0051-499a-b8e5-636a18d74f83-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02dca10f-0051-499a-b8e5-636a18d74f83" (UID: "02dca10f-0051-499a-b8e5-636a18d74f83"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:31:35 crc kubenswrapper[4725]: I1014 13:31:35.120261 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02dca10f-0051-499a-b8e5-636a18d74f83-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "02dca10f-0051-499a-b8e5-636a18d74f83" (UID: "02dca10f-0051-499a-b8e5-636a18d74f83"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:31:35 crc kubenswrapper[4725]: I1014 13:31:35.125205 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02dca10f-0051-499a-b8e5-636a18d74f83-scripts" (OuterVolumeSpecName: "scripts") pod "02dca10f-0051-499a-b8e5-636a18d74f83" (UID: "02dca10f-0051-499a-b8e5-636a18d74f83"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:31:35 crc kubenswrapper[4725]: I1014 13:31:35.175598 4725 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/02dca10f-0051-499a-b8e5-636a18d74f83-dispersionconf\") on node \"crc\" DevicePath \"\"" Oct 14 13:31:35 crc kubenswrapper[4725]: I1014 13:31:35.175638 4725 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/02dca10f-0051-499a-b8e5-636a18d74f83-swiftconf\") on node \"crc\" DevicePath \"\"" Oct 14 13:31:35 crc kubenswrapper[4725]: I1014 13:31:35.175650 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6nnc\" (UniqueName: \"kubernetes.io/projected/02dca10f-0051-499a-b8e5-636a18d74f83-kube-api-access-z6nnc\") on node \"crc\" DevicePath \"\"" Oct 14 13:31:35 crc kubenswrapper[4725]: I1014 13:31:35.175660 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02dca10f-0051-499a-b8e5-636a18d74f83-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:31:35 crc kubenswrapper[4725]: I1014 13:31:35.175669 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02dca10f-0051-499a-b8e5-636a18d74f83-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 13:31:35 crc kubenswrapper[4725]: I1014 13:31:35.175677 4725 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/02dca10f-0051-499a-b8e5-636a18d74f83-etc-swift\") on node \"crc\" DevicePath \"\"" Oct 14 13:31:35 crc kubenswrapper[4725]: I1014 13:31:35.175685 4725 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/02dca10f-0051-499a-b8e5-636a18d74f83-ring-data-devices\") on node \"crc\" DevicePath \"\"" Oct 14 13:31:35 crc kubenswrapper[4725]: I1014 13:31:35.539977 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-2h97x" event={"ID":"02dca10f-0051-499a-b8e5-636a18d74f83","Type":"ContainerDied","Data":"b5ee5405d95e5502a1f7da630a18bcf5d349c2a329120325a0e13704aef095e4"} Oct 14 13:31:35 crc kubenswrapper[4725]: I1014 13:31:35.540036 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5ee5405d95e5502a1f7da630a18bcf5d349c2a329120325a0e13704aef095e4" Oct 14 13:31:35 crc kubenswrapper[4725]: I1014 13:31:35.540124 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-2h97x" Oct 14 13:31:37 crc kubenswrapper[4725]: I1014 13:31:37.110399 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b115803-e57f-4651-8a38-9b1aece05cdf-etc-swift\") pod \"swift-storage-0\" (UID: \"8b115803-e57f-4651-8a38-9b1aece05cdf\") " pod="openstack/swift-storage-0" Oct 14 13:31:37 crc kubenswrapper[4725]: I1014 13:31:37.118024 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b115803-e57f-4651-8a38-9b1aece05cdf-etc-swift\") pod \"swift-storage-0\" (UID: \"8b115803-e57f-4651-8a38-9b1aece05cdf\") " pod="openstack/swift-storage-0" Oct 14 13:31:37 crc kubenswrapper[4725]: I1014 13:31:37.177892 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Oct 14 13:31:37 crc kubenswrapper[4725]: I1014 13:31:37.780461 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Oct 14 13:31:37 crc kubenswrapper[4725]: I1014 13:31:37.971298 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-9110-account-create-rdmx4"] Oct 14 13:31:37 crc kubenswrapper[4725]: E1014 13:31:37.971613 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a8d6508-9075-409d-b8d8-ed03113819a1" containerName="mariadb-database-create" Oct 14 13:31:37 crc kubenswrapper[4725]: I1014 13:31:37.971625 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a8d6508-9075-409d-b8d8-ed03113819a1" containerName="mariadb-database-create" Oct 14 13:31:37 crc kubenswrapper[4725]: E1014 13:31:37.971637 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e301bc4-9f27-4a05-84ed-5c194e4244d9" containerName="dnsmasq-dns" Oct 14 13:31:37 crc kubenswrapper[4725]: I1014 13:31:37.971644 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e301bc4-9f27-4a05-84ed-5c194e4244d9" containerName="dnsmasq-dns" Oct 14 13:31:37 crc kubenswrapper[4725]: E1014 13:31:37.971655 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c7bd34b-5329-4592-a439-7d5eaf070bfa" containerName="mariadb-database-create" Oct 14 13:31:37 crc kubenswrapper[4725]: I1014 13:31:37.971661 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c7bd34b-5329-4592-a439-7d5eaf070bfa" containerName="mariadb-database-create" Oct 14 13:31:37 crc kubenswrapper[4725]: E1014 13:31:37.971675 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fca1233-9113-4f55-8632-c75d41eabd80" containerName="mariadb-database-create" Oct 14 13:31:37 crc kubenswrapper[4725]: I1014 13:31:37.971681 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fca1233-9113-4f55-8632-c75d41eabd80" containerName="mariadb-database-create" Oct 14 13:31:37 crc kubenswrapper[4725]: E1014 13:31:37.971690 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02dca10f-0051-499a-b8e5-636a18d74f83" containerName="swift-ring-rebalance" Oct 14 13:31:37 crc kubenswrapper[4725]: I1014 13:31:37.971696 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="02dca10f-0051-499a-b8e5-636a18d74f83" containerName="swift-ring-rebalance" Oct 14 13:31:37 crc kubenswrapper[4725]: E1014 13:31:37.971711 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e301bc4-9f27-4a05-84ed-5c194e4244d9" containerName="init" Oct 14 13:31:37 crc kubenswrapper[4725]: I1014 13:31:37.971717 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e301bc4-9f27-4a05-84ed-5c194e4244d9" containerName="init" Oct 14 13:31:37 crc kubenswrapper[4725]: I1014 13:31:37.971854 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c7bd34b-5329-4592-a439-7d5eaf070bfa" containerName="mariadb-database-create" Oct 14 13:31:37 crc kubenswrapper[4725]: I1014 13:31:37.971871 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e301bc4-9f27-4a05-84ed-5c194e4244d9" containerName="dnsmasq-dns" Oct 14 13:31:37 crc kubenswrapper[4725]: I1014 13:31:37.971878 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="02dca10f-0051-499a-b8e5-636a18d74f83" containerName="swift-ring-rebalance" Oct 14 13:31:37 crc kubenswrapper[4725]: I1014 13:31:37.971893 4725 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1fca1233-9113-4f55-8632-c75d41eabd80" containerName="mariadb-database-create" Oct 14 13:31:37 crc kubenswrapper[4725]: I1014 13:31:37.971903 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a8d6508-9075-409d-b8d8-ed03113819a1" containerName="mariadb-database-create" Oct 14 13:31:37 crc kubenswrapper[4725]: I1014 13:31:37.972361 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-9110-account-create-rdmx4" Oct 14 13:31:37 crc kubenswrapper[4725]: I1014 13:31:37.974052 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Oct 14 13:31:37 crc kubenswrapper[4725]: I1014 13:31:37.981105 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-9110-account-create-rdmx4"] Oct 14 13:31:38 crc kubenswrapper[4725]: I1014 13:31:38.024064 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lktd\" (UniqueName: \"kubernetes.io/projected/20827da8-286c-48f4-ae94-d6de62502d1f-kube-api-access-4lktd\") pod \"keystone-9110-account-create-rdmx4\" (UID: \"20827da8-286c-48f4-ae94-d6de62502d1f\") " pod="openstack/keystone-9110-account-create-rdmx4" Oct 14 13:31:38 crc kubenswrapper[4725]: I1014 13:31:38.125271 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lktd\" (UniqueName: \"kubernetes.io/projected/20827da8-286c-48f4-ae94-d6de62502d1f-kube-api-access-4lktd\") pod \"keystone-9110-account-create-rdmx4\" (UID: \"20827da8-286c-48f4-ae94-d6de62502d1f\") " pod="openstack/keystone-9110-account-create-rdmx4" Oct 14 13:31:38 crc kubenswrapper[4725]: I1014 13:31:38.150663 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lktd\" (UniqueName: \"kubernetes.io/projected/20827da8-286c-48f4-ae94-d6de62502d1f-kube-api-access-4lktd\") pod \"keystone-9110-account-create-rdmx4\" (UID: \"20827da8-286c-48f4-ae94-d6de62502d1f\") " pod="openstack/keystone-9110-account-create-rdmx4" Oct 14 13:31:38 crc kubenswrapper[4725]: I1014 13:31:38.277221 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-ae98-account-create-cl2rs"] Oct 14 13:31:38 crc kubenswrapper[4725]: I1014 13:31:38.279102 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ae98-account-create-cl2rs" Oct 14 13:31:38 crc kubenswrapper[4725]: I1014 13:31:38.285787 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-ae98-account-create-cl2rs"] Oct 14 13:31:38 crc kubenswrapper[4725]: I1014 13:31:38.286388 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Oct 14 13:31:38 crc kubenswrapper[4725]: I1014 13:31:38.299049 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-9110-account-create-rdmx4" Oct 14 13:31:38 crc kubenswrapper[4725]: I1014 13:31:38.429737 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzvtm\" (UniqueName: \"kubernetes.io/projected/c2915d57-ca9a-4fff-ad1c-c51ae4f89775-kube-api-access-jzvtm\") pod \"placement-ae98-account-create-cl2rs\" (UID: \"c2915d57-ca9a-4fff-ad1c-c51ae4f89775\") " pod="openstack/placement-ae98-account-create-cl2rs" Oct 14 13:31:38 crc kubenswrapper[4725]: I1014 13:31:38.531442 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzvtm\" (UniqueName: \"kubernetes.io/projected/c2915d57-ca9a-4fff-ad1c-c51ae4f89775-kube-api-access-jzvtm\") pod \"placement-ae98-account-create-cl2rs\" (UID: \"c2915d57-ca9a-4fff-ad1c-c51ae4f89775\") " pod="openstack/placement-ae98-account-create-cl2rs" Oct 14 13:31:38 crc kubenswrapper[4725]: I1014 13:31:38.550549 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzvtm\" (UniqueName: \"kubernetes.io/projected/c2915d57-ca9a-4fff-ad1c-c51ae4f89775-kube-api-access-jzvtm\") pod \"placement-ae98-account-create-cl2rs\" (UID: \"c2915d57-ca9a-4fff-ad1c-c51ae4f89775\") " pod="openstack/placement-ae98-account-create-cl2rs" Oct 14 13:31:38 crc kubenswrapper[4725]: I1014 13:31:38.567190 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b115803-e57f-4651-8a38-9b1aece05cdf","Type":"ContainerStarted","Data":"843795fccd1a883bc21d16889d14ec1aab8790b3b7e288526473aae261f2eb99"} Oct 14 13:31:38 crc kubenswrapper[4725]: I1014 13:31:38.587809 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-335f-account-create-5ksxz"] Oct 14 13:31:38 crc kubenswrapper[4725]: I1014 13:31:38.592109 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-335f-account-create-5ksxz" Oct 14 13:31:38 crc kubenswrapper[4725]: I1014 13:31:38.595362 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Oct 14 13:31:38 crc kubenswrapper[4725]: I1014 13:31:38.600365 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-335f-account-create-5ksxz"] Oct 14 13:31:38 crc kubenswrapper[4725]: I1014 13:31:38.600764 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-ae98-account-create-cl2rs" Oct 14 13:31:38 crc kubenswrapper[4725]: I1014 13:31:38.733257 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-9110-account-create-rdmx4"] Oct 14 13:31:38 crc kubenswrapper[4725]: I1014 13:31:38.735294 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzgqm\" (UniqueName: \"kubernetes.io/projected/14209ea5-288f-46d5-b9ee-860116dad16c-kube-api-access-xzgqm\") pod \"glance-335f-account-create-5ksxz\" (UID: \"14209ea5-288f-46d5-b9ee-860116dad16c\") " pod="openstack/glance-335f-account-create-5ksxz" Oct 14 13:31:38 crc kubenswrapper[4725]: I1014 13:31:38.835472 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-ae98-account-create-cl2rs"] Oct 14 13:31:38 crc kubenswrapper[4725]: I1014 13:31:38.836297 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzgqm\" (UniqueName: \"kubernetes.io/projected/14209ea5-288f-46d5-b9ee-860116dad16c-kube-api-access-xzgqm\") pod \"glance-335f-account-create-5ksxz\" (UID: \"14209ea5-288f-46d5-b9ee-860116dad16c\") " pod="openstack/glance-335f-account-create-5ksxz" Oct 14 13:31:38 crc kubenswrapper[4725]: I1014 13:31:38.854427 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzgqm\" (UniqueName: \"kubernetes.io/projected/14209ea5-288f-46d5-b9ee-860116dad16c-kube-api-access-xzgqm\") pod \"glance-335f-account-create-5ksxz\" (UID: \"14209ea5-288f-46d5-b9ee-860116dad16c\") " pod="openstack/glance-335f-account-create-5ksxz" Oct 14 13:31:38 crc kubenswrapper[4725]: W1014 13:31:38.855973 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2915d57_ca9a_4fff_ad1c_c51ae4f89775.slice/crio-2d95105a4f66e0c70d857deb377ced9af5e698923ad75acfe9319d208a60f23f WatchSource:0}: Error finding container 2d95105a4f66e0c70d857deb377ced9af5e698923ad75acfe9319d208a60f23f: Status 404 returned error can't find the container with id 2d95105a4f66e0c70d857deb377ced9af5e698923ad75acfe9319d208a60f23f Oct 14 13:31:38 crc kubenswrapper[4725]: I1014 13:31:38.917090 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-335f-account-create-5ksxz" Oct 14 13:31:39 crc kubenswrapper[4725]: I1014 13:31:39.380242 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-335f-account-create-5ksxz"] Oct 14 13:31:39 crc kubenswrapper[4725]: W1014 13:31:39.442908 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14209ea5_288f_46d5_b9ee_860116dad16c.slice/crio-c4c38a7660d0d156d758c2305d28ddcb81386231aacb35541bff46e6caa724c7 WatchSource:0}: Error finding container c4c38a7660d0d156d758c2305d28ddcb81386231aacb35541bff46e6caa724c7: Status 404 returned error can't find the container with id c4c38a7660d0d156d758c2305d28ddcb81386231aacb35541bff46e6caa724c7 Oct 14 13:31:39 crc kubenswrapper[4725]: I1014 13:31:39.578241 4725 generic.go:334] "Generic (PLEG): container finished" podID="20827da8-286c-48f4-ae94-d6de62502d1f" containerID="15814c7e8be7cb0545643bad06b141691c05a600c9c858a24c57a8908122e3e8" exitCode=0 Oct 14 13:31:39 crc kubenswrapper[4725]: I1014 13:31:39.578339 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9110-account-create-rdmx4" event={"ID":"20827da8-286c-48f4-ae94-d6de62502d1f","Type":"ContainerDied","Data":"15814c7e8be7cb0545643bad06b141691c05a600c9c858a24c57a8908122e3e8"} Oct 14 13:31:39 crc kubenswrapper[4725]: I1014 13:31:39.579010 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9110-account-create-rdmx4" event={"ID":"20827da8-286c-48f4-ae94-d6de62502d1f","Type":"ContainerStarted","Data":"d03d91a043e68c9c498ff4f7bce0d4c6e1c66da170a305a5b6dd3d621a0f660c"} Oct 14 13:31:39 crc kubenswrapper[4725]: I1014 13:31:39.580919 4725 generic.go:334] "Generic (PLEG): container finished" podID="e690ed1d-b1fe-48b5-817c-d512cef45181" containerID="469d6c9dd8280483d8bdc01c4219e2d1f77837452b0087234379692a73aa3bd2" exitCode=0 Oct 14 13:31:39 crc kubenswrapper[4725]: I1014 13:31:39.581022 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e690ed1d-b1fe-48b5-817c-d512cef45181","Type":"ContainerDied","Data":"469d6c9dd8280483d8bdc01c4219e2d1f77837452b0087234379692a73aa3bd2"} Oct 14 13:31:39 crc kubenswrapper[4725]: I1014 13:31:39.583249 4725 generic.go:334] "Generic (PLEG): container finished" podID="c2915d57-ca9a-4fff-ad1c-c51ae4f89775" containerID="44a94dea2132f7abb9f12750d08d0e9c1a32705752da8ac9fcc661b68ab41be0" exitCode=0 Oct 14 13:31:39 crc kubenswrapper[4725]: I1014 13:31:39.583320 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ae98-account-create-cl2rs" event={"ID":"c2915d57-ca9a-4fff-ad1c-c51ae4f89775","Type":"ContainerDied","Data":"44a94dea2132f7abb9f12750d08d0e9c1a32705752da8ac9fcc661b68ab41be0"} Oct 14 13:31:39 crc kubenswrapper[4725]: I1014 13:31:39.583340 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ae98-account-create-cl2rs" event={"ID":"c2915d57-ca9a-4fff-ad1c-c51ae4f89775","Type":"ContainerStarted","Data":"2d95105a4f66e0c70d857deb377ced9af5e698923ad75acfe9319d208a60f23f"} Oct 14 13:31:39 crc kubenswrapper[4725]: I1014 13:31:39.589871 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-335f-account-create-5ksxz" event={"ID":"14209ea5-288f-46d5-b9ee-860116dad16c","Type":"ContainerStarted","Data":"c4c38a7660d0d156d758c2305d28ddcb81386231aacb35541bff46e6caa724c7"} Oct 14 13:31:39 crc kubenswrapper[4725]: I1014 13:31:39.592422 4725 generic.go:334] "Generic 
(PLEG): container finished" podID="cc3270b7-85d3-4e11-a5e1-d9e42f8b876c" containerID="9af8ac336b7385e92f1f7700ffd3de85aee088e2f9349df4545a5e644720a771" exitCode=0 Oct 14 13:31:39 crc kubenswrapper[4725]: I1014 13:31:39.592522 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c","Type":"ContainerDied","Data":"9af8ac336b7385e92f1f7700ffd3de85aee088e2f9349df4545a5e644720a771"} Oct 14 13:31:40 crc kubenswrapper[4725]: I1014 13:31:40.604111 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e690ed1d-b1fe-48b5-817c-d512cef45181","Type":"ContainerStarted","Data":"31b014df818ced151387b051e61349052027454c22b3d318be09ba0d92bf08c0"} Oct 14 13:31:40 crc kubenswrapper[4725]: I1014 13:31:40.605003 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Oct 14 13:31:40 crc kubenswrapper[4725]: I1014 13:31:40.607885 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b115803-e57f-4651-8a38-9b1aece05cdf","Type":"ContainerStarted","Data":"9fe102d9efb893c869eb654bc2a0cd48abf8fe767ce6e96a9c669217b8f9deb2"} Oct 14 13:31:40 crc kubenswrapper[4725]: I1014 13:31:40.607927 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b115803-e57f-4651-8a38-9b1aece05cdf","Type":"ContainerStarted","Data":"3d3be67b5a2e497feb91be5b7ccbe787a26842802b9795da34b2b3634d3a21f2"} Oct 14 13:31:40 crc kubenswrapper[4725]: I1014 13:31:40.607942 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b115803-e57f-4651-8a38-9b1aece05cdf","Type":"ContainerStarted","Data":"60be1c59ce5e9ca2b1a6d5d523282c9ba2e98a39aadfd909d704e98d26d2b32f"} Oct 14 13:31:40 crc kubenswrapper[4725]: I1014 13:31:40.607950 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b115803-e57f-4651-8a38-9b1aece05cdf","Type":"ContainerStarted","Data":"a1ee66a2267c0f9acb08487eff5a91bcdf5046ebe993d35917e1611160f88479"} Oct 14 13:31:40 crc kubenswrapper[4725]: I1014 13:31:40.611137 4725 generic.go:334] "Generic (PLEG): container finished" podID="14209ea5-288f-46d5-b9ee-860116dad16c" containerID="c04cc397ec46ab6329e384403f2d4e33b46a326ee9e90b9cf68993007c3d5b1b" exitCode=0 Oct 14 13:31:40 crc kubenswrapper[4725]: I1014 13:31:40.611250 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-335f-account-create-5ksxz" event={"ID":"14209ea5-288f-46d5-b9ee-860116dad16c","Type":"ContainerDied","Data":"c04cc397ec46ab6329e384403f2d4e33b46a326ee9e90b9cf68993007c3d5b1b"} Oct 14 13:31:40 crc kubenswrapper[4725]: I1014 13:31:40.613538 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c","Type":"ContainerStarted","Data":"01a850d76feb2f99796249d8faf492b913822b629bba3a40210d1f02f75d0fff"} Oct 14 13:31:40 crc kubenswrapper[4725]: I1014 13:31:40.613865 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Oct 14 13:31:40 crc kubenswrapper[4725]: I1014 13:31:40.637995 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371979.216797 podStartE2EDuration="57.637978647s" podCreationTimestamp="2025-10-14 13:30:43 +0000 UTC" firstStartedPulling="2025-10-14 13:30:45.863169189 
+0000 UTC m=+962.711603998" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:31:40.636594119 +0000 UTC m=+1017.485028928" watchObservedRunningTime="2025-10-14 13:31:40.637978647 +0000 UTC m=+1017.486413466" Oct 14 13:31:40 crc kubenswrapper[4725]: I1014 13:31:40.664595 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.935245773 podStartE2EDuration="57.664578501s" podCreationTimestamp="2025-10-14 13:30:43 +0000 UTC" firstStartedPulling="2025-10-14 13:30:45.731526126 +0000 UTC m=+962.579960935" lastFinishedPulling="2025-10-14 13:31:05.460858854 +0000 UTC m=+982.309293663" observedRunningTime="2025-10-14 13:31:40.659962145 +0000 UTC m=+1017.508396944" watchObservedRunningTime="2025-10-14 13:31:40.664578501 +0000 UTC m=+1017.513013320" Oct 14 13:31:41 crc kubenswrapper[4725]: I1014 13:31:41.010520 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ae98-account-create-cl2rs" Oct 14 13:31:41 crc kubenswrapper[4725]: I1014 13:31:41.016240 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-9110-account-create-rdmx4" Oct 14 13:31:41 crc kubenswrapper[4725]: I1014 13:31:41.184444 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lktd\" (UniqueName: \"kubernetes.io/projected/20827da8-286c-48f4-ae94-d6de62502d1f-kube-api-access-4lktd\") pod \"20827da8-286c-48f4-ae94-d6de62502d1f\" (UID: \"20827da8-286c-48f4-ae94-d6de62502d1f\") " Oct 14 13:31:41 crc kubenswrapper[4725]: I1014 13:31:41.184579 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzvtm\" (UniqueName: \"kubernetes.io/projected/c2915d57-ca9a-4fff-ad1c-c51ae4f89775-kube-api-access-jzvtm\") pod \"c2915d57-ca9a-4fff-ad1c-c51ae4f89775\" (UID: \"c2915d57-ca9a-4fff-ad1c-c51ae4f89775\") " Oct 14 13:31:41 crc kubenswrapper[4725]: I1014 13:31:41.209483 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2915d57-ca9a-4fff-ad1c-c51ae4f89775-kube-api-access-jzvtm" (OuterVolumeSpecName: "kube-api-access-jzvtm") pod "c2915d57-ca9a-4fff-ad1c-c51ae4f89775" (UID: "c2915d57-ca9a-4fff-ad1c-c51ae4f89775"). InnerVolumeSpecName "kube-api-access-jzvtm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:31:41 crc kubenswrapper[4725]: I1014 13:31:41.209654 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20827da8-286c-48f4-ae94-d6de62502d1f-kube-api-access-4lktd" (OuterVolumeSpecName: "kube-api-access-4lktd") pod "20827da8-286c-48f4-ae94-d6de62502d1f" (UID: "20827da8-286c-48f4-ae94-d6de62502d1f"). InnerVolumeSpecName "kube-api-access-4lktd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:31:41 crc kubenswrapper[4725]: I1014 13:31:41.286830 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lktd\" (UniqueName: \"kubernetes.io/projected/20827da8-286c-48f4-ae94-d6de62502d1f-kube-api-access-4lktd\") on node \"crc\" DevicePath \"\"" Oct 14 13:31:41 crc kubenswrapper[4725]: I1014 13:31:41.286866 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzvtm\" (UniqueName: \"kubernetes.io/projected/c2915d57-ca9a-4fff-ad1c-c51ae4f89775-kube-api-access-jzvtm\") on node \"crc\" DevicePath \"\"" Oct 14 13:31:41 crc kubenswrapper[4725]: I1014 13:31:41.623810 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9110-account-create-rdmx4" event={"ID":"20827da8-286c-48f4-ae94-d6de62502d1f","Type":"ContainerDied","Data":"d03d91a043e68c9c498ff4f7bce0d4c6e1c66da170a305a5b6dd3d621a0f660c"} Oct 14 13:31:41 crc kubenswrapper[4725]: I1014 13:31:41.623843 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d03d91a043e68c9c498ff4f7bce0d4c6e1c66da170a305a5b6dd3d621a0f660c" Oct 14 13:31:41 crc kubenswrapper[4725]: I1014 13:31:41.623894 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-9110-account-create-rdmx4" Oct 14 13:31:41 crc kubenswrapper[4725]: I1014 13:31:41.634789 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b115803-e57f-4651-8a38-9b1aece05cdf","Type":"ContainerStarted","Data":"5096db41852c65c00e7b0b3aacf72456f8ccb4a9c688fc1f69f6153fe320389c"} Oct 14 13:31:41 crc kubenswrapper[4725]: I1014 13:31:41.638512 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ae98-account-create-cl2rs" Oct 14 13:31:41 crc kubenswrapper[4725]: I1014 13:31:41.641592 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ae98-account-create-cl2rs" event={"ID":"c2915d57-ca9a-4fff-ad1c-c51ae4f89775","Type":"ContainerDied","Data":"2d95105a4f66e0c70d857deb377ced9af5e698923ad75acfe9319d208a60f23f"} Oct 14 13:31:41 crc kubenswrapper[4725]: I1014 13:31:41.641616 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d95105a4f66e0c70d857deb377ced9af5e698923ad75acfe9319d208a60f23f" Oct 14 13:31:41 crc kubenswrapper[4725]: I1014 13:31:41.927908 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-335f-account-create-5ksxz" Oct 14 13:31:42 crc kubenswrapper[4725]: I1014 13:31:42.104971 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzgqm\" (UniqueName: \"kubernetes.io/projected/14209ea5-288f-46d5-b9ee-860116dad16c-kube-api-access-xzgqm\") pod \"14209ea5-288f-46d5-b9ee-860116dad16c\" (UID: \"14209ea5-288f-46d5-b9ee-860116dad16c\") " Oct 14 13:31:42 crc kubenswrapper[4725]: I1014 13:31:42.110082 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14209ea5-288f-46d5-b9ee-860116dad16c-kube-api-access-xzgqm" (OuterVolumeSpecName: "kube-api-access-xzgqm") pod "14209ea5-288f-46d5-b9ee-860116dad16c" (UID: "14209ea5-288f-46d5-b9ee-860116dad16c"). InnerVolumeSpecName "kube-api-access-xzgqm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:31:42 crc kubenswrapper[4725]: I1014 13:31:42.206912 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzgqm\" (UniqueName: \"kubernetes.io/projected/14209ea5-288f-46d5-b9ee-860116dad16c-kube-api-access-xzgqm\") on node \"crc\" DevicePath \"\"" Oct 14 13:31:42 crc kubenswrapper[4725]: I1014 13:31:42.659919 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b115803-e57f-4651-8a38-9b1aece05cdf","Type":"ContainerStarted","Data":"6bd162a603c87edeaa0dac1d53bcb5a1ed3254b006057063d73d6bdd22f1bb85"} Oct 14 13:31:42 crc kubenswrapper[4725]: I1014 13:31:42.662417 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-335f-account-create-5ksxz" event={"ID":"14209ea5-288f-46d5-b9ee-860116dad16c","Type":"ContainerDied","Data":"c4c38a7660d0d156d758c2305d28ddcb81386231aacb35541bff46e6caa724c7"} Oct 14 13:31:42 crc kubenswrapper[4725]: I1014 13:31:42.662477 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4c38a7660d0d156d758c2305d28ddcb81386231aacb35541bff46e6caa724c7" Oct 14 13:31:42 crc kubenswrapper[4725]: I1014 13:31:42.662515 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-335f-account-create-5ksxz" Oct 14 13:31:43 crc kubenswrapper[4725]: I1014 13:31:43.653212 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-5j9rx"] Oct 14 13:31:43 crc kubenswrapper[4725]: E1014 13:31:43.653932 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2915d57-ca9a-4fff-ad1c-c51ae4f89775" containerName="mariadb-account-create" Oct 14 13:31:43 crc kubenswrapper[4725]: I1014 13:31:43.653953 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2915d57-ca9a-4fff-ad1c-c51ae4f89775" containerName="mariadb-account-create" Oct 14 13:31:43 crc kubenswrapper[4725]: E1014 13:31:43.653973 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20827da8-286c-48f4-ae94-d6de62502d1f" containerName="mariadb-account-create" Oct 14 13:31:43 crc kubenswrapper[4725]: I1014 13:31:43.653981 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="20827da8-286c-48f4-ae94-d6de62502d1f" containerName="mariadb-account-create" Oct 14 13:31:43 crc kubenswrapper[4725]: E1014 13:31:43.653997 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14209ea5-288f-46d5-b9ee-860116dad16c" containerName="mariadb-account-create" Oct 14 13:31:43 crc kubenswrapper[4725]: I1014 13:31:43.654006 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="14209ea5-288f-46d5-b9ee-860116dad16c" containerName="mariadb-account-create" Oct 14 13:31:43 crc kubenswrapper[4725]: I1014 13:31:43.654238 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="14209ea5-288f-46d5-b9ee-860116dad16c" containerName="mariadb-account-create" Oct 14 13:31:43 crc kubenswrapper[4725]: I1014 13:31:43.654269 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="20827da8-286c-48f4-ae94-d6de62502d1f" containerName="mariadb-account-create" Oct 14 13:31:43 crc kubenswrapper[4725]: I1014 13:31:43.654290 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2915d57-ca9a-4fff-ad1c-c51ae4f89775" containerName="mariadb-account-create" Oct 14 13:31:43 crc kubenswrapper[4725]: I1014 13:31:43.655021 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-5j9rx" Oct 14 13:31:43 crc kubenswrapper[4725]: I1014 13:31:43.656872 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Oct 14 13:31:43 crc kubenswrapper[4725]: I1014 13:31:43.659752 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-p8cwt" Oct 14 13:31:43 crc kubenswrapper[4725]: I1014 13:31:43.666805 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-5j9rx"] Oct 14 13:31:43 crc kubenswrapper[4725]: I1014 13:31:43.835993 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn7rz\" (UniqueName: \"kubernetes.io/projected/b32894cc-6bf3-46d4-981c-be6040373b59-kube-api-access-mn7rz\") pod \"glance-db-sync-5j9rx\" (UID: \"b32894cc-6bf3-46d4-981c-be6040373b59\") " pod="openstack/glance-db-sync-5j9rx" Oct 14 13:31:43 crc kubenswrapper[4725]: I1014 13:31:43.836051 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b32894cc-6bf3-46d4-981c-be6040373b59-db-sync-config-data\") pod \"glance-db-sync-5j9rx\" (UID: \"b32894cc-6bf3-46d4-981c-be6040373b59\") " pod="openstack/glance-db-sync-5j9rx" Oct 14 13:31:43 crc kubenswrapper[4725]: I1014 13:31:43.836079 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b32894cc-6bf3-46d4-981c-be6040373b59-combined-ca-bundle\") pod \"glance-db-sync-5j9rx\" (UID: \"b32894cc-6bf3-46d4-981c-be6040373b59\") " pod="openstack/glance-db-sync-5j9rx" Oct 14 13:31:43 crc kubenswrapper[4725]: I1014 13:31:43.836336 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b32894cc-6bf3-46d4-981c-be6040373b59-config-data\") pod \"glance-db-sync-5j9rx\" (UID: \"b32894cc-6bf3-46d4-981c-be6040373b59\") " pod="openstack/glance-db-sync-5j9rx" Oct 14 13:31:43 crc kubenswrapper[4725]: I1014 13:31:43.937742 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn7rz\" (UniqueName: \"kubernetes.io/projected/b32894cc-6bf3-46d4-981c-be6040373b59-kube-api-access-mn7rz\") pod \"glance-db-sync-5j9rx\" (UID: \"b32894cc-6bf3-46d4-981c-be6040373b59\") " pod="openstack/glance-db-sync-5j9rx" Oct 14 13:31:43 crc kubenswrapper[4725]: I1014 13:31:43.937802 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b32894cc-6bf3-46d4-981c-be6040373b59-db-sync-config-data\") pod \"glance-db-sync-5j9rx\" (UID: \"b32894cc-6bf3-46d4-981c-be6040373b59\") " pod="openstack/glance-db-sync-5j9rx" Oct 14 13:31:43 crc kubenswrapper[4725]: I1014 13:31:43.937840 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b32894cc-6bf3-46d4-981c-be6040373b59-combined-ca-bundle\") pod \"glance-db-sync-5j9rx\" (UID: \"b32894cc-6bf3-46d4-981c-be6040373b59\") " pod="openstack/glance-db-sync-5j9rx" Oct 14 13:31:43 crc kubenswrapper[4725]: I1014 13:31:43.937898 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b32894cc-6bf3-46d4-981c-be6040373b59-config-data\") pod 
\"glance-db-sync-5j9rx\" (UID: \"b32894cc-6bf3-46d4-981c-be6040373b59\") " pod="openstack/glance-db-sync-5j9rx" Oct 14 13:31:43 crc kubenswrapper[4725]: I1014 13:31:43.941744 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b32894cc-6bf3-46d4-981c-be6040373b59-combined-ca-bundle\") pod \"glance-db-sync-5j9rx\" (UID: \"b32894cc-6bf3-46d4-981c-be6040373b59\") " pod="openstack/glance-db-sync-5j9rx" Oct 14 13:31:43 crc kubenswrapper[4725]: I1014 13:31:43.941901 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Oct 14 13:31:43 crc kubenswrapper[4725]: I1014 13:31:43.953073 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b32894cc-6bf3-46d4-981c-be6040373b59-db-sync-config-data\") pod \"glance-db-sync-5j9rx\" (UID: \"b32894cc-6bf3-46d4-981c-be6040373b59\") " pod="openstack/glance-db-sync-5j9rx" Oct 14 13:31:43 crc kubenswrapper[4725]: I1014 13:31:43.955634 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b32894cc-6bf3-46d4-981c-be6040373b59-config-data\") pod \"glance-db-sync-5j9rx\" (UID: \"b32894cc-6bf3-46d4-981c-be6040373b59\") " pod="openstack/glance-db-sync-5j9rx" Oct 14 13:31:43 crc kubenswrapper[4725]: I1014 13:31:43.957390 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn7rz\" (UniqueName: \"kubernetes.io/projected/b32894cc-6bf3-46d4-981c-be6040373b59-kube-api-access-mn7rz\") pod \"glance-db-sync-5j9rx\" (UID: \"b32894cc-6bf3-46d4-981c-be6040373b59\") " pod="openstack/glance-db-sync-5j9rx" Oct 14 13:31:43 crc kubenswrapper[4725]: I1014 13:31:43.985430 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-p8cwt" Oct 14 13:31:43 crc kubenswrapper[4725]: I1014 13:31:43.991607 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-5j9rx" Oct 14 13:31:44 crc kubenswrapper[4725]: I1014 13:31:44.319982 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-5j9rx"] Oct 14 13:31:44 crc kubenswrapper[4725]: I1014 13:31:44.680645 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b115803-e57f-4651-8a38-9b1aece05cdf","Type":"ContainerStarted","Data":"8c6b9309cce1d4079a88e68398eb2d9ab51dc5891aeeec67a68581d9f3e550b1"} Oct 14 13:31:44 crc kubenswrapper[4725]: I1014 13:31:44.681483 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5j9rx" event={"ID":"b32894cc-6bf3-46d4-981c-be6040373b59","Type":"ContainerStarted","Data":"c546ca8f2e130132cbb2a7a83e8bfb42bc7e5a5529f4943646b1b4fa11524d5d"} Oct 14 13:31:44 crc kubenswrapper[4725]: I1014 13:31:44.817875 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-x2x2g" podUID="19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c" containerName="ovn-controller" probeResult="failure" output=< Oct 14 13:31:44 crc kubenswrapper[4725]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Oct 14 13:31:44 crc kubenswrapper[4725]: > Oct 14 13:31:44 crc kubenswrapper[4725]: I1014 13:31:44.887098 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-mbwcn" Oct 14 13:31:44 crc kubenswrapper[4725]: I1014 13:31:44.895722 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-mbwcn" Oct 14 13:31:45 crc kubenswrapper[4725]: I1014 13:31:45.152253 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-x2x2g-config-nb6h4"] Oct 14 13:31:45 crc kubenswrapper[4725]: I1014 13:31:45.154102 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-x2x2g-config-nb6h4" Oct 14 13:31:45 crc kubenswrapper[4725]: I1014 13:31:45.157079 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Oct 14 13:31:45 crc kubenswrapper[4725]: I1014 13:31:45.162233 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-x2x2g-config-nb6h4"] Oct 14 13:31:45 crc kubenswrapper[4725]: I1014 13:31:45.259978 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/47371e7b-d7ce-42d0-af6b-22af9adec8ed-var-log-ovn\") pod \"ovn-controller-x2x2g-config-nb6h4\" (UID: \"47371e7b-d7ce-42d0-af6b-22af9adec8ed\") " pod="openstack/ovn-controller-x2x2g-config-nb6h4" Oct 14 13:31:45 crc kubenswrapper[4725]: I1014 13:31:45.260040 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/47371e7b-d7ce-42d0-af6b-22af9adec8ed-var-run\") pod \"ovn-controller-x2x2g-config-nb6h4\" (UID: \"47371e7b-d7ce-42d0-af6b-22af9adec8ed\") " pod="openstack/ovn-controller-x2x2g-config-nb6h4" Oct 14 13:31:45 crc kubenswrapper[4725]: I1014 13:31:45.260074 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/47371e7b-d7ce-42d0-af6b-22af9adec8ed-additional-scripts\") pod \"ovn-controller-x2x2g-config-nb6h4\" (UID: \"47371e7b-d7ce-42d0-af6b-22af9adec8ed\") " pod="openstack/ovn-controller-x2x2g-config-nb6h4" Oct 14 13:31:45 crc kubenswrapper[4725]: I1014 13:31:45.260259 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/47371e7b-d7ce-42d0-af6b-22af9adec8ed-scripts\") pod \"ovn-controller-x2x2g-config-nb6h4\" (UID: \"47371e7b-d7ce-42d0-af6b-22af9adec8ed\") " pod="openstack/ovn-controller-x2x2g-config-nb6h4" Oct 14 13:31:45 crc kubenswrapper[4725]: I1014 13:31:45.260327 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2vwk\" (UniqueName: \"kubernetes.io/projected/47371e7b-d7ce-42d0-af6b-22af9adec8ed-kube-api-access-x2vwk\") pod \"ovn-controller-x2x2g-config-nb6h4\" (UID: \"47371e7b-d7ce-42d0-af6b-22af9adec8ed\") " pod="openstack/ovn-controller-x2x2g-config-nb6h4" Oct 14 13:31:45 crc kubenswrapper[4725]: I1014 13:31:45.260369 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/47371e7b-d7ce-42d0-af6b-22af9adec8ed-var-run-ovn\") pod \"ovn-controller-x2x2g-config-nb6h4\" (UID: \"47371e7b-d7ce-42d0-af6b-22af9adec8ed\") " pod="openstack/ovn-controller-x2x2g-config-nb6h4" Oct 14 13:31:45 crc kubenswrapper[4725]: I1014 13:31:45.362527 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/47371e7b-d7ce-42d0-af6b-22af9adec8ed-scripts\") pod \"ovn-controller-x2x2g-config-nb6h4\" (UID: \"47371e7b-d7ce-42d0-af6b-22af9adec8ed\") " pod="openstack/ovn-controller-x2x2g-config-nb6h4" Oct 14 13:31:45 crc kubenswrapper[4725]: I1014 13:31:45.362858 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2vwk\" (UniqueName: 
\"kubernetes.io/projected/47371e7b-d7ce-42d0-af6b-22af9adec8ed-kube-api-access-x2vwk\") pod \"ovn-controller-x2x2g-config-nb6h4\" (UID: \"47371e7b-d7ce-42d0-af6b-22af9adec8ed\") " pod="openstack/ovn-controller-x2x2g-config-nb6h4" Oct 14 13:31:45 crc kubenswrapper[4725]: I1014 13:31:45.362897 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/47371e7b-d7ce-42d0-af6b-22af9adec8ed-var-run-ovn\") pod \"ovn-controller-x2x2g-config-nb6h4\" (UID: \"47371e7b-d7ce-42d0-af6b-22af9adec8ed\") " pod="openstack/ovn-controller-x2x2g-config-nb6h4" Oct 14 13:31:45 crc kubenswrapper[4725]: I1014 13:31:45.362996 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/47371e7b-d7ce-42d0-af6b-22af9adec8ed-var-log-ovn\") pod \"ovn-controller-x2x2g-config-nb6h4\" (UID: \"47371e7b-d7ce-42d0-af6b-22af9adec8ed\") " pod="openstack/ovn-controller-x2x2g-config-nb6h4" Oct 14 13:31:45 crc kubenswrapper[4725]: I1014 13:31:45.363043 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/47371e7b-d7ce-42d0-af6b-22af9adec8ed-var-run\") pod \"ovn-controller-x2x2g-config-nb6h4\" (UID: \"47371e7b-d7ce-42d0-af6b-22af9adec8ed\") " pod="openstack/ovn-controller-x2x2g-config-nb6h4" Oct 14 13:31:45 crc kubenswrapper[4725]: I1014 13:31:45.363072 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/47371e7b-d7ce-42d0-af6b-22af9adec8ed-additional-scripts\") pod \"ovn-controller-x2x2g-config-nb6h4\" (UID: \"47371e7b-d7ce-42d0-af6b-22af9adec8ed\") " pod="openstack/ovn-controller-x2x2g-config-nb6h4" Oct 14 13:31:45 crc kubenswrapper[4725]: I1014 13:31:45.363465 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/47371e7b-d7ce-42d0-af6b-22af9adec8ed-var-log-ovn\") pod \"ovn-controller-x2x2g-config-nb6h4\" (UID: \"47371e7b-d7ce-42d0-af6b-22af9adec8ed\") " pod="openstack/ovn-controller-x2x2g-config-nb6h4" Oct 14 13:31:45 crc kubenswrapper[4725]: I1014 13:31:45.363542 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/47371e7b-d7ce-42d0-af6b-22af9adec8ed-var-run-ovn\") pod \"ovn-controller-x2x2g-config-nb6h4\" (UID: \"47371e7b-d7ce-42d0-af6b-22af9adec8ed\") " pod="openstack/ovn-controller-x2x2g-config-nb6h4" Oct 14 13:31:45 crc kubenswrapper[4725]: I1014 13:31:45.363589 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/47371e7b-d7ce-42d0-af6b-22af9adec8ed-var-run\") pod \"ovn-controller-x2x2g-config-nb6h4\" (UID: \"47371e7b-d7ce-42d0-af6b-22af9adec8ed\") " pod="openstack/ovn-controller-x2x2g-config-nb6h4" Oct 14 13:31:45 crc kubenswrapper[4725]: I1014 13:31:45.363724 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/47371e7b-d7ce-42d0-af6b-22af9adec8ed-additional-scripts\") pod \"ovn-controller-x2x2g-config-nb6h4\" (UID: \"47371e7b-d7ce-42d0-af6b-22af9adec8ed\") " pod="openstack/ovn-controller-x2x2g-config-nb6h4" Oct 14 13:31:45 crc kubenswrapper[4725]: I1014 13:31:45.364897 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/47371e7b-d7ce-42d0-af6b-22af9adec8ed-scripts\") pod \"ovn-controller-x2x2g-config-nb6h4\" (UID: \"47371e7b-d7ce-42d0-af6b-22af9adec8ed\") " pod="openstack/ovn-controller-x2x2g-config-nb6h4" Oct 14 13:31:45 crc kubenswrapper[4725]: I1014 13:31:45.384422 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2vwk\" (UniqueName: \"kubernetes.io/projected/47371e7b-d7ce-42d0-af6b-22af9adec8ed-kube-api-access-x2vwk\") pod \"ovn-controller-x2x2g-config-nb6h4\" (UID: \"47371e7b-d7ce-42d0-af6b-22af9adec8ed\") " pod="openstack/ovn-controller-x2x2g-config-nb6h4" Oct 14 13:31:45 crc kubenswrapper[4725]: I1014 13:31:45.524735 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-x2x2g-config-nb6h4" Oct 14 13:31:45 crc kubenswrapper[4725]: I1014 13:31:45.695733 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b115803-e57f-4651-8a38-9b1aece05cdf","Type":"ContainerStarted","Data":"4cb59c5e8921d7dfcbc5bb516a1770437286fd1b1969f864a08d5416baadf5e2"} Oct 14 13:31:45 crc kubenswrapper[4725]: I1014 13:31:45.994036 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-x2x2g-config-nb6h4"] Oct 14 13:31:46 crc kubenswrapper[4725]: I1014 13:31:46.708921 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x2x2g-config-nb6h4" event={"ID":"47371e7b-d7ce-42d0-af6b-22af9adec8ed","Type":"ContainerStarted","Data":"0cd106722729bf4661ebb248e7c37a7f4c17d35033d56e8bcbd1a0d93589649a"} Oct 14 13:31:46 crc kubenswrapper[4725]: I1014 13:31:46.709215 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x2x2g-config-nb6h4" event={"ID":"47371e7b-d7ce-42d0-af6b-22af9adec8ed","Type":"ContainerStarted","Data":"09d6aa6191c617cbd4938c4e844f0ec5d2582097d8afd8c1d58d14f3c6b778f9"} Oct 14 13:31:46 crc kubenswrapper[4725]: I1014 13:31:46.716691 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b115803-e57f-4651-8a38-9b1aece05cdf","Type":"ContainerStarted","Data":"b74f95b84b8661f618ac4663e55f85233f04259d63fa65c5dd399633d7a60cd0"} Oct 14 13:31:46 crc kubenswrapper[4725]: I1014 13:31:46.716728 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b115803-e57f-4651-8a38-9b1aece05cdf","Type":"ContainerStarted","Data":"a27e10e10dab6bcc749ea190dc33def7332bac5589600444728822392ee9f99e"} Oct 14 13:31:46 crc kubenswrapper[4725]: I1014 13:31:46.731245 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-x2x2g-config-nb6h4" podStartSLOduration=1.731225483 podStartE2EDuration="1.731225483s" podCreationTimestamp="2025-10-14 13:31:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:31:46.724759817 +0000 UTC m=+1023.573194616" watchObservedRunningTime="2025-10-14 13:31:46.731225483 +0000 UTC m=+1023.579660292" Oct 14 13:31:47 crc kubenswrapper[4725]: I1014 13:31:47.725133 4725 generic.go:334] "Generic (PLEG): container finished" podID="47371e7b-d7ce-42d0-af6b-22af9adec8ed" containerID="0cd106722729bf4661ebb248e7c37a7f4c17d35033d56e8bcbd1a0d93589649a" exitCode=0 Oct 14 13:31:47 crc kubenswrapper[4725]: I1014 13:31:47.725206 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x2x2g-config-nb6h4" 
event={"ID":"47371e7b-d7ce-42d0-af6b-22af9adec8ed","Type":"ContainerDied","Data":"0cd106722729bf4661ebb248e7c37a7f4c17d35033d56e8bcbd1a0d93589649a"} Oct 14 13:31:49 crc kubenswrapper[4725]: I1014 13:31:49.338515 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-x2x2g-config-nb6h4" Oct 14 13:31:49 crc kubenswrapper[4725]: I1014 13:31:49.424241 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/47371e7b-d7ce-42d0-af6b-22af9adec8ed-additional-scripts\") pod \"47371e7b-d7ce-42d0-af6b-22af9adec8ed\" (UID: \"47371e7b-d7ce-42d0-af6b-22af9adec8ed\") " Oct 14 13:31:49 crc kubenswrapper[4725]: I1014 13:31:49.424594 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/47371e7b-d7ce-42d0-af6b-22af9adec8ed-var-log-ovn\") pod \"47371e7b-d7ce-42d0-af6b-22af9adec8ed\" (UID: \"47371e7b-d7ce-42d0-af6b-22af9adec8ed\") " Oct 14 13:31:49 crc kubenswrapper[4725]: I1014 13:31:49.424613 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/47371e7b-d7ce-42d0-af6b-22af9adec8ed-var-run-ovn\") pod \"47371e7b-d7ce-42d0-af6b-22af9adec8ed\" (UID: \"47371e7b-d7ce-42d0-af6b-22af9adec8ed\") " Oct 14 13:31:49 crc kubenswrapper[4725]: I1014 13:31:49.424677 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/47371e7b-d7ce-42d0-af6b-22af9adec8ed-var-run\") pod \"47371e7b-d7ce-42d0-af6b-22af9adec8ed\" (UID: \"47371e7b-d7ce-42d0-af6b-22af9adec8ed\") " Oct 14 13:31:49 crc kubenswrapper[4725]: I1014 13:31:49.424724 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2vwk\" (UniqueName: \"kubernetes.io/projected/47371e7b-d7ce-42d0-af6b-22af9adec8ed-kube-api-access-x2vwk\") pod \"47371e7b-d7ce-42d0-af6b-22af9adec8ed\" (UID: \"47371e7b-d7ce-42d0-af6b-22af9adec8ed\") " Oct 14 13:31:49 crc kubenswrapper[4725]: I1014 13:31:49.424728 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47371e7b-d7ce-42d0-af6b-22af9adec8ed-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "47371e7b-d7ce-42d0-af6b-22af9adec8ed" (UID: "47371e7b-d7ce-42d0-af6b-22af9adec8ed"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:31:49 crc kubenswrapper[4725]: I1014 13:31:49.424740 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47371e7b-d7ce-42d0-af6b-22af9adec8ed-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "47371e7b-d7ce-42d0-af6b-22af9adec8ed" (UID: "47371e7b-d7ce-42d0-af6b-22af9adec8ed"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:31:49 crc kubenswrapper[4725]: I1014 13:31:49.424791 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/47371e7b-d7ce-42d0-af6b-22af9adec8ed-scripts\") pod \"47371e7b-d7ce-42d0-af6b-22af9adec8ed\" (UID: \"47371e7b-d7ce-42d0-af6b-22af9adec8ed\") " Oct 14 13:31:49 crc kubenswrapper[4725]: I1014 13:31:49.424814 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47371e7b-d7ce-42d0-af6b-22af9adec8ed-var-run" (OuterVolumeSpecName: "var-run") pod "47371e7b-d7ce-42d0-af6b-22af9adec8ed" (UID: "47371e7b-d7ce-42d0-af6b-22af9adec8ed"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:31:49 crc kubenswrapper[4725]: I1014 13:31:49.425184 4725 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/47371e7b-d7ce-42d0-af6b-22af9adec8ed-var-log-ovn\") on node \"crc\" DevicePath \"\"" Oct 14 13:31:49 crc kubenswrapper[4725]: I1014 13:31:49.425201 4725 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/47371e7b-d7ce-42d0-af6b-22af9adec8ed-var-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 14 13:31:49 crc kubenswrapper[4725]: I1014 13:31:49.425211 4725 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/47371e7b-d7ce-42d0-af6b-22af9adec8ed-var-run\") on node \"crc\" DevicePath \"\"" Oct 14 13:31:49 crc kubenswrapper[4725]: I1014 13:31:49.425573 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47371e7b-d7ce-42d0-af6b-22af9adec8ed-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "47371e7b-d7ce-42d0-af6b-22af9adec8ed" (UID: "47371e7b-d7ce-42d0-af6b-22af9adec8ed"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:31:49 crc kubenswrapper[4725]: I1014 13:31:49.425726 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47371e7b-d7ce-42d0-af6b-22af9adec8ed-scripts" (OuterVolumeSpecName: "scripts") pod "47371e7b-d7ce-42d0-af6b-22af9adec8ed" (UID: "47371e7b-d7ce-42d0-af6b-22af9adec8ed"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:31:49 crc kubenswrapper[4725]: I1014 13:31:49.431675 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47371e7b-d7ce-42d0-af6b-22af9adec8ed-kube-api-access-x2vwk" (OuterVolumeSpecName: "kube-api-access-x2vwk") pod "47371e7b-d7ce-42d0-af6b-22af9adec8ed" (UID: "47371e7b-d7ce-42d0-af6b-22af9adec8ed"). InnerVolumeSpecName "kube-api-access-x2vwk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:31:49 crc kubenswrapper[4725]: I1014 13:31:49.527418 4725 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/47371e7b-d7ce-42d0-af6b-22af9adec8ed-additional-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 13:31:49 crc kubenswrapper[4725]: I1014 13:31:49.527492 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2vwk\" (UniqueName: \"kubernetes.io/projected/47371e7b-d7ce-42d0-af6b-22af9adec8ed-kube-api-access-x2vwk\") on node \"crc\" DevicePath \"\"" Oct 14 13:31:49 crc kubenswrapper[4725]: I1014 13:31:49.527514 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/47371e7b-d7ce-42d0-af6b-22af9adec8ed-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 13:31:49 crc kubenswrapper[4725]: I1014 13:31:49.763415 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b115803-e57f-4651-8a38-9b1aece05cdf","Type":"ContainerStarted","Data":"6c220a840a8f1954c3f4055a2a571e8564dbd84ccd1da65c60177f301e25302b"} Oct 14 13:31:49 crc kubenswrapper[4725]: I1014 13:31:49.767664 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-x2x2g-config-nb6h4" event={"ID":"47371e7b-d7ce-42d0-af6b-22af9adec8ed","Type":"ContainerDied","Data":"09d6aa6191c617cbd4938c4e844f0ec5d2582097d8afd8c1d58d14f3c6b778f9"} Oct 14 13:31:49 crc kubenswrapper[4725]: I1014 13:31:49.767700 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09d6aa6191c617cbd4938c4e844f0ec5d2582097d8afd8c1d58d14f3c6b778f9" Oct 14 13:31:49 crc kubenswrapper[4725]: I1014 13:31:49.767750 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-x2x2g-config-nb6h4" Oct 14 13:31:49 crc kubenswrapper[4725]: I1014 13:31:49.843983 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-x2x2g-config-nb6h4"] Oct 14 13:31:49 crc kubenswrapper[4725]: I1014 13:31:49.846709 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-x2x2g" Oct 14 13:31:49 crc kubenswrapper[4725]: I1014 13:31:49.853866 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-x2x2g-config-nb6h4"] Oct 14 13:31:49 crc kubenswrapper[4725]: I1014 13:31:49.929934 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47371e7b-d7ce-42d0-af6b-22af9adec8ed" path="/var/lib/kubelet/pods/47371e7b-d7ce-42d0-af6b-22af9adec8ed/volumes" Oct 14 13:31:55 crc kubenswrapper[4725]: I1014 13:31:55.201589 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Oct 14 13:31:55 crc kubenswrapper[4725]: I1014 13:31:55.291821 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Oct 14 13:31:55 crc kubenswrapper[4725]: I1014 13:31:55.541765 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-lspzt"] Oct 14 13:31:55 crc kubenswrapper[4725]: E1014 13:31:55.542125 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47371e7b-d7ce-42d0-af6b-22af9adec8ed" containerName="ovn-config" Oct 14 13:31:55 crc kubenswrapper[4725]: I1014 13:31:55.542143 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="47371e7b-d7ce-42d0-af6b-22af9adec8ed" containerName="ovn-config" Oct 14 13:31:55 crc kubenswrapper[4725]: I1014 13:31:55.542323 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="47371e7b-d7ce-42d0-af6b-22af9adec8ed" containerName="ovn-config" Oct 14 13:31:55 crc kubenswrapper[4725]: I1014 13:31:55.542985 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-lspzt" Oct 14 13:31:55 crc kubenswrapper[4725]: I1014 13:31:55.556361 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-lspzt"] Oct 14 13:31:55 crc kubenswrapper[4725]: I1014 13:31:55.627351 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrvjt\" (UniqueName: \"kubernetes.io/projected/2a3f32f8-bd1d-47c8-ab85-6ca3d5e571fe-kube-api-access-hrvjt\") pod \"cinder-db-create-lspzt\" (UID: \"2a3f32f8-bd1d-47c8-ab85-6ca3d5e571fe\") " pod="openstack/cinder-db-create-lspzt" Oct 14 13:31:55 crc kubenswrapper[4725]: I1014 13:31:55.638796 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-xbh4s"] Oct 14 13:31:55 crc kubenswrapper[4725]: I1014 13:31:55.639803 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-xbh4s" Oct 14 13:31:55 crc kubenswrapper[4725]: I1014 13:31:55.657413 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-xbh4s"] Oct 14 13:31:55 crc kubenswrapper[4725]: I1014 13:31:55.728735 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzdcs\" (UniqueName: \"kubernetes.io/projected/9c91e6d9-e590-4467-81db-d9a571375693-kube-api-access-hzdcs\") pod \"barbican-db-create-xbh4s\" (UID: \"9c91e6d9-e590-4467-81db-d9a571375693\") " pod="openstack/barbican-db-create-xbh4s" Oct 14 13:31:55 crc kubenswrapper[4725]: I1014 13:31:55.728821 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrvjt\" (UniqueName: \"kubernetes.io/projected/2a3f32f8-bd1d-47c8-ab85-6ca3d5e571fe-kube-api-access-hrvjt\") pod \"cinder-db-create-lspzt\" (UID: \"2a3f32f8-bd1d-47c8-ab85-6ca3d5e571fe\") " pod="openstack/cinder-db-create-lspzt" Oct 14 13:31:55 crc kubenswrapper[4725]: I1014 13:31:55.746959 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrvjt\" (UniqueName: \"kubernetes.io/projected/2a3f32f8-bd1d-47c8-ab85-6ca3d5e571fe-kube-api-access-hrvjt\") pod \"cinder-db-create-lspzt\" (UID: \"2a3f32f8-bd1d-47c8-ab85-6ca3d5e571fe\") " pod="openstack/cinder-db-create-lspzt" Oct 14 13:31:55 crc kubenswrapper[4725]: I1014 13:31:55.803398 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-x95f9"] Oct 14 13:31:55 crc kubenswrapper[4725]: I1014 13:31:55.809790 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-x95f9" Oct 14 13:31:55 crc kubenswrapper[4725]: I1014 13:31:55.816372 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 14 13:31:55 crc kubenswrapper[4725]: I1014 13:31:55.816382 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 14 13:31:55 crc kubenswrapper[4725]: I1014 13:31:55.816903 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 14 13:31:55 crc kubenswrapper[4725]: I1014 13:31:55.818104 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-h7l4t" Oct 14 13:31:55 crc kubenswrapper[4725]: I1014 13:31:55.820139 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-x95f9"] Oct 14 13:31:55 crc kubenswrapper[4725]: I1014 13:31:55.833815 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzdcs\" (UniqueName: \"kubernetes.io/projected/9c91e6d9-e590-4467-81db-d9a571375693-kube-api-access-hzdcs\") pod \"barbican-db-create-xbh4s\" (UID: \"9c91e6d9-e590-4467-81db-d9a571375693\") " pod="openstack/barbican-db-create-xbh4s" Oct 14 13:31:55 crc kubenswrapper[4725]: I1014 13:31:55.859299 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-lspzt" Oct 14 13:31:55 crc kubenswrapper[4725]: I1014 13:31:55.860360 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzdcs\" (UniqueName: \"kubernetes.io/projected/9c91e6d9-e590-4467-81db-d9a571375693-kube-api-access-hzdcs\") pod \"barbican-db-create-xbh4s\" (UID: \"9c91e6d9-e590-4467-81db-d9a571375693\") " pod="openstack/barbican-db-create-xbh4s" Oct 14 13:31:55 crc kubenswrapper[4725]: I1014 13:31:55.867836 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-2p9qc"] Oct 14 13:31:55 crc kubenswrapper[4725]: I1014 13:31:55.868946 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-2p9qc" Oct 14 13:31:55 crc kubenswrapper[4725]: I1014 13:31:55.878808 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-2p9qc"] Oct 14 13:31:55 crc kubenswrapper[4725]: I1014 13:31:55.935032 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35b7e347-c4b6-450d-88c1-f352d4301fed-combined-ca-bundle\") pod \"keystone-db-sync-x95f9\" (UID: \"35b7e347-c4b6-450d-88c1-f352d4301fed\") " pod="openstack/keystone-db-sync-x95f9" Oct 14 13:31:55 crc kubenswrapper[4725]: I1014 13:31:55.935072 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35b7e347-c4b6-450d-88c1-f352d4301fed-config-data\") pod \"keystone-db-sync-x95f9\" (UID: \"35b7e347-c4b6-450d-88c1-f352d4301fed\") " pod="openstack/keystone-db-sync-x95f9" Oct 14 13:31:55 crc kubenswrapper[4725]: I1014 13:31:55.935150 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28752\" (UniqueName: \"kubernetes.io/projected/a34964eb-f117-4f77-a5f8-bbc22ae15966-kube-api-access-28752\") pod \"neutron-db-create-2p9qc\" (UID: \"a34964eb-f117-4f77-a5f8-bbc22ae15966\") " pod="openstack/neutron-db-create-2p9qc" Oct 14 13:31:55 crc kubenswrapper[4725]: I1014 13:31:55.935193 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlczh\" (UniqueName: \"kubernetes.io/projected/35b7e347-c4b6-450d-88c1-f352d4301fed-kube-api-access-nlczh\") pod \"keystone-db-sync-x95f9\" (UID: \"35b7e347-c4b6-450d-88c1-f352d4301fed\") " pod="openstack/keystone-db-sync-x95f9" Oct 14 13:31:55 crc kubenswrapper[4725]: I1014 13:31:55.956802 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-xbh4s" Oct 14 13:31:56 crc kubenswrapper[4725]: I1014 13:31:56.036877 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28752\" (UniqueName: \"kubernetes.io/projected/a34964eb-f117-4f77-a5f8-bbc22ae15966-kube-api-access-28752\") pod \"neutron-db-create-2p9qc\" (UID: \"a34964eb-f117-4f77-a5f8-bbc22ae15966\") " pod="openstack/neutron-db-create-2p9qc" Oct 14 13:31:56 crc kubenswrapper[4725]: I1014 13:31:56.036947 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlczh\" (UniqueName: \"kubernetes.io/projected/35b7e347-c4b6-450d-88c1-f352d4301fed-kube-api-access-nlczh\") pod \"keystone-db-sync-x95f9\" (UID: \"35b7e347-c4b6-450d-88c1-f352d4301fed\") " pod="openstack/keystone-db-sync-x95f9" Oct 14 13:31:56 crc kubenswrapper[4725]: I1014 13:31:56.037031 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35b7e347-c4b6-450d-88c1-f352d4301fed-combined-ca-bundle\") pod \"keystone-db-sync-x95f9\" (UID: \"35b7e347-c4b6-450d-88c1-f352d4301fed\") " pod="openstack/keystone-db-sync-x95f9" Oct 14 13:31:56 crc kubenswrapper[4725]: I1014 13:31:56.037057 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35b7e347-c4b6-450d-88c1-f352d4301fed-config-data\") pod \"keystone-db-sync-x95f9\" (UID: \"35b7e347-c4b6-450d-88c1-f352d4301fed\") " pod="openstack/keystone-db-sync-x95f9" Oct 14 13:31:56 crc kubenswrapper[4725]: I1014 13:31:56.040760 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35b7e347-c4b6-450d-88c1-f352d4301fed-config-data\") pod \"keystone-db-sync-x95f9\" (UID: \"35b7e347-c4b6-450d-88c1-f352d4301fed\") " pod="openstack/keystone-db-sync-x95f9" Oct 14 13:31:56 crc kubenswrapper[4725]: I1014 13:31:56.046033 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35b7e347-c4b6-450d-88c1-f352d4301fed-combined-ca-bundle\") pod \"keystone-db-sync-x95f9\" (UID: \"35b7e347-c4b6-450d-88c1-f352d4301fed\") " pod="openstack/keystone-db-sync-x95f9" Oct 14 13:31:56 crc kubenswrapper[4725]: I1014 13:31:56.053039 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28752\" (UniqueName: \"kubernetes.io/projected/a34964eb-f117-4f77-a5f8-bbc22ae15966-kube-api-access-28752\") pod \"neutron-db-create-2p9qc\" (UID: \"a34964eb-f117-4f77-a5f8-bbc22ae15966\") " pod="openstack/neutron-db-create-2p9qc" Oct 14 13:31:56 crc kubenswrapper[4725]: I1014 13:31:56.061699 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlczh\" (UniqueName: \"kubernetes.io/projected/35b7e347-c4b6-450d-88c1-f352d4301fed-kube-api-access-nlczh\") pod \"keystone-db-sync-x95f9\" (UID: \"35b7e347-c4b6-450d-88c1-f352d4301fed\") " pod="openstack/keystone-db-sync-x95f9" Oct 14 13:31:56 crc kubenswrapper[4725]: I1014 13:31:56.138527 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-x95f9" Oct 14 13:31:56 crc kubenswrapper[4725]: I1014 13:31:56.204667 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-2p9qc" Oct 14 13:31:56 crc kubenswrapper[4725]: I1014 13:31:56.840133 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b115803-e57f-4651-8a38-9b1aece05cdf","Type":"ContainerStarted","Data":"3ab7aeaad0acee07f56bfa1afd0c73a3d8b35ff9d13d2c4ff6d829365e92705d"} Oct 14 13:31:57 crc kubenswrapper[4725]: I1014 13:31:57.119775 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-2p9qc"] Oct 14 13:31:57 crc kubenswrapper[4725]: I1014 13:31:57.174239 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-lspzt"] Oct 14 13:31:57 crc kubenswrapper[4725]: I1014 13:31:57.179724 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-xbh4s"] Oct 14 13:31:57 crc kubenswrapper[4725]: I1014 13:31:57.275402 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-x95f9"] Oct 14 13:31:57 crc kubenswrapper[4725]: W1014 13:31:57.337706 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35b7e347_c4b6_450d_88c1_f352d4301fed.slice/crio-1a407052e92c36abd26e532f4afdab217850d57e8f99fa489e163fcf7e5c85c9 WatchSource:0}: Error finding container 1a407052e92c36abd26e532f4afdab217850d57e8f99fa489e163fcf7e5c85c9: Status 404 returned error can't find the container with id 1a407052e92c36abd26e532f4afdab217850d57e8f99fa489e163fcf7e5c85c9 Oct 14 13:31:57 crc kubenswrapper[4725]: I1014 13:31:57.848980 4725 generic.go:334] "Generic (PLEG): container finished" podID="a34964eb-f117-4f77-a5f8-bbc22ae15966" containerID="ccd77cba56bcb481918438a50be2c8a2164a709fe2f4b8ba0be0f69f120fae38" exitCode=0 Oct 14 13:31:57 crc kubenswrapper[4725]: I1014 13:31:57.849326 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-2p9qc" event={"ID":"a34964eb-f117-4f77-a5f8-bbc22ae15966","Type":"ContainerDied","Data":"ccd77cba56bcb481918438a50be2c8a2164a709fe2f4b8ba0be0f69f120fae38"} Oct 14 13:31:57 crc kubenswrapper[4725]: I1014 13:31:57.849351 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-2p9qc" event={"ID":"a34964eb-f117-4f77-a5f8-bbc22ae15966","Type":"ContainerStarted","Data":"cc8d4c37a0062946762651941321575d269cf85ef68ce0c90b9ea8fe00786131"} Oct 14 13:31:57 crc kubenswrapper[4725]: I1014 13:31:57.851768 4725 generic.go:334] "Generic (PLEG): container finished" podID="2a3f32f8-bd1d-47c8-ab85-6ca3d5e571fe" containerID="12e1f99cec1847abbaa03479237d08cf7c04a9c377c0c7e4cd3e9ac509590f5e" exitCode=0 Oct 14 13:31:57 crc kubenswrapper[4725]: I1014 13:31:57.851829 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-lspzt" event={"ID":"2a3f32f8-bd1d-47c8-ab85-6ca3d5e571fe","Type":"ContainerDied","Data":"12e1f99cec1847abbaa03479237d08cf7c04a9c377c0c7e4cd3e9ac509590f5e"} Oct 14 13:31:57 crc kubenswrapper[4725]: I1014 13:31:57.851845 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-lspzt" event={"ID":"2a3f32f8-bd1d-47c8-ab85-6ca3d5e571fe","Type":"ContainerStarted","Data":"9475a7585ae7f17b365f15ce65828f6b346541be3dece0ebc6c37c59c497fb29"} Oct 14 13:31:57 crc kubenswrapper[4725]: I1014 13:31:57.858286 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"8b115803-e57f-4651-8a38-9b1aece05cdf","Type":"ContainerStarted","Data":"bb830af55edc75019846f60212e6d2a8d26bf8900461e3fd96fa3a12bb6c1e26"} Oct 14 13:31:57 crc kubenswrapper[4725]: I1014 13:31:57.858335 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b115803-e57f-4651-8a38-9b1aece05cdf","Type":"ContainerStarted","Data":"6b9eac0d356c848d4008cf66d315d44463c851684bbf167c71e13681ff9f9ff5"} Oct 14 13:31:57 crc kubenswrapper[4725]: I1014 13:31:57.858348 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b115803-e57f-4651-8a38-9b1aece05cdf","Type":"ContainerStarted","Data":"5caafd080c151189807b7cafd53428d2ad680dcf1433a0bb6df749547493c7a1"} Oct 14 13:31:57 crc kubenswrapper[4725]: I1014 13:31:57.860388 4725 generic.go:334] "Generic (PLEG): container finished" podID="9c91e6d9-e590-4467-81db-d9a571375693" containerID="bcd59c199f1ffb1f3eea3423994ce494df0c2dc612ee5d525905a069776ec8a7" exitCode=0 Oct 14 13:31:57 crc kubenswrapper[4725]: I1014 13:31:57.860461 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xbh4s" event={"ID":"9c91e6d9-e590-4467-81db-d9a571375693","Type":"ContainerDied","Data":"bcd59c199f1ffb1f3eea3423994ce494df0c2dc612ee5d525905a069776ec8a7"} Oct 14 13:31:57 crc kubenswrapper[4725]: I1014 13:31:57.860492 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xbh4s" event={"ID":"9c91e6d9-e590-4467-81db-d9a571375693","Type":"ContainerStarted","Data":"3843bd27522259f0cd1103bb5a9ccfb5de4bdbe2dc57e1980a432652696b64a5"} Oct 14 13:31:57 crc kubenswrapper[4725]: I1014 13:31:57.862200 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x95f9" event={"ID":"35b7e347-c4b6-450d-88c1-f352d4301fed","Type":"ContainerStarted","Data":"1a407052e92c36abd26e532f4afdab217850d57e8f99fa489e163fcf7e5c85c9"} Oct 14 13:31:57 crc kubenswrapper[4725]: I1014 13:31:57.863742 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5j9rx" event={"ID":"b32894cc-6bf3-46d4-981c-be6040373b59","Type":"ContainerStarted","Data":"4b5104dd6a19fb6f62cda6b685dbf82642a7eef3812133d369a0aeb4d69c21c3"} Oct 14 13:31:57 crc kubenswrapper[4725]: I1014 13:31:57.890471 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-5j9rx" podStartSLOduration=2.602900825 podStartE2EDuration="14.890434985s" podCreationTimestamp="2025-10-14 13:31:43 +0000 UTC" firstStartedPulling="2025-10-14 13:31:44.322068258 +0000 UTC m=+1021.170503077" lastFinishedPulling="2025-10-14 13:31:56.609602428 +0000 UTC m=+1033.458037237" observedRunningTime="2025-10-14 13:31:57.883285401 +0000 UTC m=+1034.731720210" watchObservedRunningTime="2025-10-14 13:31:57.890434985 +0000 UTC m=+1034.738869794" Oct 14 13:31:57 crc kubenswrapper[4725]: I1014 13:31:57.959733 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=29.52949232 podStartE2EDuration="37.959716379s" podCreationTimestamp="2025-10-14 13:31:20 +0000 UTC" firstStartedPulling="2025-10-14 13:31:37.775767839 +0000 UTC m=+1014.624202658" lastFinishedPulling="2025-10-14 13:31:46.205991908 +0000 UTC m=+1023.054426717" observedRunningTime="2025-10-14 13:31:57.941993107 +0000 UTC m=+1034.790427916" watchObservedRunningTime="2025-10-14 13:31:57.959716379 +0000 UTC m=+1034.808151188" Oct 14 13:31:58 crc kubenswrapper[4725]: I1014 13:31:58.229243 4725 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-fl2mq"] Oct 14 13:31:58 crc kubenswrapper[4725]: I1014 13:31:58.231196 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-fl2mq" Oct 14 13:31:58 crc kubenswrapper[4725]: I1014 13:31:58.234629 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Oct 14 13:31:58 crc kubenswrapper[4725]: I1014 13:31:58.248345 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-fl2mq"] Oct 14 13:31:58 crc kubenswrapper[4725]: I1014 13:31:58.285715 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/053a41b6-63b9-4abc-af41-c865c8e232a3-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-fl2mq\" (UID: \"053a41b6-63b9-4abc-af41-c865c8e232a3\") " pod="openstack/dnsmasq-dns-77585f5f8c-fl2mq" Oct 14 13:31:58 crc kubenswrapper[4725]: I1014 13:31:58.285804 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/053a41b6-63b9-4abc-af41-c865c8e232a3-config\") pod \"dnsmasq-dns-77585f5f8c-fl2mq\" (UID: \"053a41b6-63b9-4abc-af41-c865c8e232a3\") " pod="openstack/dnsmasq-dns-77585f5f8c-fl2mq" Oct 14 13:31:58 crc kubenswrapper[4725]: I1014 13:31:58.285847 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/053a41b6-63b9-4abc-af41-c865c8e232a3-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-fl2mq\" (UID: \"053a41b6-63b9-4abc-af41-c865c8e232a3\") " pod="openstack/dnsmasq-dns-77585f5f8c-fl2mq" Oct 14 13:31:58 crc kubenswrapper[4725]: I1014 13:31:58.285898 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/053a41b6-63b9-4abc-af41-c865c8e232a3-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-fl2mq\" (UID: \"053a41b6-63b9-4abc-af41-c865c8e232a3\") " pod="openstack/dnsmasq-dns-77585f5f8c-fl2mq" Oct 14 13:31:58 crc kubenswrapper[4725]: I1014 13:31:58.285943 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7k9k\" (UniqueName: \"kubernetes.io/projected/053a41b6-63b9-4abc-af41-c865c8e232a3-kube-api-access-r7k9k\") pod \"dnsmasq-dns-77585f5f8c-fl2mq\" (UID: \"053a41b6-63b9-4abc-af41-c865c8e232a3\") " pod="openstack/dnsmasq-dns-77585f5f8c-fl2mq" Oct 14 13:31:58 crc kubenswrapper[4725]: I1014 13:31:58.285984 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/053a41b6-63b9-4abc-af41-c865c8e232a3-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-fl2mq\" (UID: \"053a41b6-63b9-4abc-af41-c865c8e232a3\") " pod="openstack/dnsmasq-dns-77585f5f8c-fl2mq" Oct 14 13:31:58 crc kubenswrapper[4725]: I1014 13:31:58.387854 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7k9k\" (UniqueName: \"kubernetes.io/projected/053a41b6-63b9-4abc-af41-c865c8e232a3-kube-api-access-r7k9k\") pod \"dnsmasq-dns-77585f5f8c-fl2mq\" (UID: \"053a41b6-63b9-4abc-af41-c865c8e232a3\") " pod="openstack/dnsmasq-dns-77585f5f8c-fl2mq" Oct 14 13:31:58 crc kubenswrapper[4725]: I1014 13:31:58.387935 4725 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/053a41b6-63b9-4abc-af41-c865c8e232a3-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-fl2mq\" (UID: \"053a41b6-63b9-4abc-af41-c865c8e232a3\") " pod="openstack/dnsmasq-dns-77585f5f8c-fl2mq" Oct 14 13:31:58 crc kubenswrapper[4725]: I1014 13:31:58.388022 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/053a41b6-63b9-4abc-af41-c865c8e232a3-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-fl2mq\" (UID: \"053a41b6-63b9-4abc-af41-c865c8e232a3\") " pod="openstack/dnsmasq-dns-77585f5f8c-fl2mq" Oct 14 13:31:58 crc kubenswrapper[4725]: I1014 13:31:58.388087 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/053a41b6-63b9-4abc-af41-c865c8e232a3-config\") pod \"dnsmasq-dns-77585f5f8c-fl2mq\" (UID: \"053a41b6-63b9-4abc-af41-c865c8e232a3\") " pod="openstack/dnsmasq-dns-77585f5f8c-fl2mq" Oct 14 13:31:58 crc kubenswrapper[4725]: I1014 13:31:58.388267 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/053a41b6-63b9-4abc-af41-c865c8e232a3-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-fl2mq\" (UID: \"053a41b6-63b9-4abc-af41-c865c8e232a3\") " pod="openstack/dnsmasq-dns-77585f5f8c-fl2mq" Oct 14 13:31:58 crc kubenswrapper[4725]: I1014 13:31:58.389038 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/053a41b6-63b9-4abc-af41-c865c8e232a3-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-fl2mq\" (UID: \"053a41b6-63b9-4abc-af41-c865c8e232a3\") " pod="openstack/dnsmasq-dns-77585f5f8c-fl2mq" Oct 14 13:31:58 crc kubenswrapper[4725]: I1014 13:31:58.389309 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/053a41b6-63b9-4abc-af41-c865c8e232a3-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-fl2mq\" (UID: \"053a41b6-63b9-4abc-af41-c865c8e232a3\") " pod="openstack/dnsmasq-dns-77585f5f8c-fl2mq" Oct 14 13:31:58 crc kubenswrapper[4725]: I1014 13:31:58.389165 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/053a41b6-63b9-4abc-af41-c865c8e232a3-config\") pod \"dnsmasq-dns-77585f5f8c-fl2mq\" (UID: \"053a41b6-63b9-4abc-af41-c865c8e232a3\") " pod="openstack/dnsmasq-dns-77585f5f8c-fl2mq" Oct 14 13:31:58 crc kubenswrapper[4725]: I1014 13:31:58.389373 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/053a41b6-63b9-4abc-af41-c865c8e232a3-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-fl2mq\" (UID: \"053a41b6-63b9-4abc-af41-c865c8e232a3\") " pod="openstack/dnsmasq-dns-77585f5f8c-fl2mq" Oct 14 13:31:58 crc kubenswrapper[4725]: I1014 13:31:58.389665 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/053a41b6-63b9-4abc-af41-c865c8e232a3-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-fl2mq\" (UID: \"053a41b6-63b9-4abc-af41-c865c8e232a3\") " pod="openstack/dnsmasq-dns-77585f5f8c-fl2mq" Oct 14 13:31:58 crc kubenswrapper[4725]: I1014 13:31:58.395318 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/053a41b6-63b9-4abc-af41-c865c8e232a3-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-fl2mq\" (UID: \"053a41b6-63b9-4abc-af41-c865c8e232a3\") " pod="openstack/dnsmasq-dns-77585f5f8c-fl2mq" Oct 14 13:31:58 crc kubenswrapper[4725]: I1014 13:31:58.426090 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7k9k\" (UniqueName: \"kubernetes.io/projected/053a41b6-63b9-4abc-af41-c865c8e232a3-kube-api-access-r7k9k\") pod \"dnsmasq-dns-77585f5f8c-fl2mq\" (UID: \"053a41b6-63b9-4abc-af41-c865c8e232a3\") " pod="openstack/dnsmasq-dns-77585f5f8c-fl2mq" Oct 14 13:31:58 crc kubenswrapper[4725]: I1014 13:31:58.549623 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-fl2mq" Oct 14 13:31:58 crc kubenswrapper[4725]: I1014 13:31:58.788355 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-fl2mq"] Oct 14 13:31:58 crc kubenswrapper[4725]: I1014 13:31:58.872394 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-fl2mq" event={"ID":"053a41b6-63b9-4abc-af41-c865c8e232a3","Type":"ContainerStarted","Data":"ddc99c56fcc5d977cb295c02064c2ad0503021d048cff9f7e83c47c6963dffd8"} Oct 14 13:31:59 crc kubenswrapper[4725]: I1014 13:31:59.225580 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-2p9qc" Oct 14 13:31:59 crc kubenswrapper[4725]: I1014 13:31:59.258797 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-xbh4s" Oct 14 13:31:59 crc kubenswrapper[4725]: I1014 13:31:59.267661 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-lspzt" Oct 14 13:31:59 crc kubenswrapper[4725]: I1014 13:31:59.304644 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28752\" (UniqueName: \"kubernetes.io/projected/a34964eb-f117-4f77-a5f8-bbc22ae15966-kube-api-access-28752\") pod \"a34964eb-f117-4f77-a5f8-bbc22ae15966\" (UID: \"a34964eb-f117-4f77-a5f8-bbc22ae15966\") " Oct 14 13:31:59 crc kubenswrapper[4725]: I1014 13:31:59.304756 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrvjt\" (UniqueName: \"kubernetes.io/projected/2a3f32f8-bd1d-47c8-ab85-6ca3d5e571fe-kube-api-access-hrvjt\") pod \"2a3f32f8-bd1d-47c8-ab85-6ca3d5e571fe\" (UID: \"2a3f32f8-bd1d-47c8-ab85-6ca3d5e571fe\") " Oct 14 13:31:59 crc kubenswrapper[4725]: I1014 13:31:59.304776 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzdcs\" (UniqueName: \"kubernetes.io/projected/9c91e6d9-e590-4467-81db-d9a571375693-kube-api-access-hzdcs\") pod \"9c91e6d9-e590-4467-81db-d9a571375693\" (UID: \"9c91e6d9-e590-4467-81db-d9a571375693\") " Oct 14 13:31:59 crc kubenswrapper[4725]: I1014 13:31:59.309742 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c91e6d9-e590-4467-81db-d9a571375693-kube-api-access-hzdcs" (OuterVolumeSpecName: "kube-api-access-hzdcs") pod "9c91e6d9-e590-4467-81db-d9a571375693" (UID: "9c91e6d9-e590-4467-81db-d9a571375693"). InnerVolumeSpecName "kube-api-access-hzdcs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:31:59 crc kubenswrapper[4725]: I1014 13:31:59.310238 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a34964eb-f117-4f77-a5f8-bbc22ae15966-kube-api-access-28752" (OuterVolumeSpecName: "kube-api-access-28752") pod "a34964eb-f117-4f77-a5f8-bbc22ae15966" (UID: "a34964eb-f117-4f77-a5f8-bbc22ae15966"). InnerVolumeSpecName "kube-api-access-28752". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:31:59 crc kubenswrapper[4725]: I1014 13:31:59.310321 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a3f32f8-bd1d-47c8-ab85-6ca3d5e571fe-kube-api-access-hrvjt" (OuterVolumeSpecName: "kube-api-access-hrvjt") pod "2a3f32f8-bd1d-47c8-ab85-6ca3d5e571fe" (UID: "2a3f32f8-bd1d-47c8-ab85-6ca3d5e571fe"). InnerVolumeSpecName "kube-api-access-hrvjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:31:59 crc kubenswrapper[4725]: I1014 13:31:59.406740 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrvjt\" (UniqueName: \"kubernetes.io/projected/2a3f32f8-bd1d-47c8-ab85-6ca3d5e571fe-kube-api-access-hrvjt\") on node \"crc\" DevicePath \"\"" Oct 14 13:31:59 crc kubenswrapper[4725]: I1014 13:31:59.406791 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzdcs\" (UniqueName: \"kubernetes.io/projected/9c91e6d9-e590-4467-81db-d9a571375693-kube-api-access-hzdcs\") on node \"crc\" DevicePath \"\"" Oct 14 13:31:59 crc kubenswrapper[4725]: I1014 13:31:59.406809 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28752\" (UniqueName: \"kubernetes.io/projected/a34964eb-f117-4f77-a5f8-bbc22ae15966-kube-api-access-28752\") on node \"crc\" DevicePath \"\"" Oct 14 13:31:59 crc kubenswrapper[4725]: I1014 13:31:59.885363 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-xbh4s" Oct 14 13:31:59 crc kubenswrapper[4725]: I1014 13:31:59.885346 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xbh4s" event={"ID":"9c91e6d9-e590-4467-81db-d9a571375693","Type":"ContainerDied","Data":"3843bd27522259f0cd1103bb5a9ccfb5de4bdbe2dc57e1980a432652696b64a5"} Oct 14 13:31:59 crc kubenswrapper[4725]: I1014 13:31:59.885817 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3843bd27522259f0cd1103bb5a9ccfb5de4bdbe2dc57e1980a432652696b64a5" Oct 14 13:31:59 crc kubenswrapper[4725]: I1014 13:31:59.888278 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-2p9qc" Oct 14 13:31:59 crc kubenswrapper[4725]: I1014 13:31:59.888276 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-2p9qc" event={"ID":"a34964eb-f117-4f77-a5f8-bbc22ae15966","Type":"ContainerDied","Data":"cc8d4c37a0062946762651941321575d269cf85ef68ce0c90b9ea8fe00786131"} Oct 14 13:31:59 crc kubenswrapper[4725]: I1014 13:31:59.888325 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc8d4c37a0062946762651941321575d269cf85ef68ce0c90b9ea8fe00786131" Oct 14 13:31:59 crc kubenswrapper[4725]: I1014 13:31:59.889903 4725 generic.go:334] "Generic (PLEG): container finished" podID="053a41b6-63b9-4abc-af41-c865c8e232a3" containerID="7ece3fd56a52d99c89c0fe4eb1f7836f39b49e6c5a780bb8820158f0bc787a50" exitCode=0 Oct 14 13:31:59 crc kubenswrapper[4725]: I1014 13:31:59.890007 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-fl2mq" event={"ID":"053a41b6-63b9-4abc-af41-c865c8e232a3","Type":"ContainerDied","Data":"7ece3fd56a52d99c89c0fe4eb1f7836f39b49e6c5a780bb8820158f0bc787a50"} Oct 14 13:31:59 crc kubenswrapper[4725]: I1014 13:31:59.895767 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-lspzt" event={"ID":"2a3f32f8-bd1d-47c8-ab85-6ca3d5e571fe","Type":"ContainerDied","Data":"9475a7585ae7f17b365f15ce65828f6b346541be3dece0ebc6c37c59c497fb29"} Oct 14 13:31:59 crc kubenswrapper[4725]: I1014 13:31:59.895798 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9475a7585ae7f17b365f15ce65828f6b346541be3dece0ebc6c37c59c497fb29" Oct 14 13:31:59 crc kubenswrapper[4725]: I1014 13:31:59.896061 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-lspzt" Oct 14 13:32:00 crc kubenswrapper[4725]: I1014 13:32:00.904953 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-fl2mq" event={"ID":"053a41b6-63b9-4abc-af41-c865c8e232a3","Type":"ContainerStarted","Data":"5b5e7cf8d4448ba7eefe67c203130f2442d0d6163d0756d0b5534486e7e166da"} Oct 14 13:32:00 crc kubenswrapper[4725]: I1014 13:32:00.905224 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77585f5f8c-fl2mq" Oct 14 13:32:00 crc kubenswrapper[4725]: I1014 13:32:00.925939 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77585f5f8c-fl2mq" podStartSLOduration=2.925918925 podStartE2EDuration="2.925918925s" podCreationTimestamp="2025-10-14 13:31:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:32:00.918637128 +0000 UTC m=+1037.767071937" watchObservedRunningTime="2025-10-14 13:32:00.925918925 +0000 UTC m=+1037.774353754" Oct 14 13:32:02 crc kubenswrapper[4725]: I1014 13:32:02.930068 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x95f9" event={"ID":"35b7e347-c4b6-450d-88c1-f352d4301fed","Type":"ContainerStarted","Data":"1add8eb3ccc3329b410231e970f2cdcd827d32cc14a769fea42289d6cd77b06a"} Oct 14 13:32:02 crc kubenswrapper[4725]: I1014 13:32:02.950423 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-x95f9" podStartSLOduration=2.818260842 podStartE2EDuration="7.950399528s" podCreationTimestamp="2025-10-14 13:31:55 +0000 UTC" firstStartedPulling="2025-10-14 13:31:57.339196012 +0000 UTC m=+1034.187630811" lastFinishedPulling="2025-10-14 13:32:02.471334688 +0000 UTC m=+1039.319769497" observedRunningTime="2025-10-14 13:32:02.948159847 +0000 UTC m=+1039.796594666" watchObservedRunningTime="2025-10-14 13:32:02.950399528 +0000 UTC m=+1039.798834377" Oct 14 13:32:03 crc kubenswrapper[4725]: I1014 13:32:03.940385 4725 generic.go:334] "Generic (PLEG): container finished" podID="b32894cc-6bf3-46d4-981c-be6040373b59" containerID="4b5104dd6a19fb6f62cda6b685dbf82642a7eef3812133d369a0aeb4d69c21c3" exitCode=0 Oct 14 13:32:03 crc kubenswrapper[4725]: I1014 13:32:03.940558 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5j9rx" event={"ID":"b32894cc-6bf3-46d4-981c-be6040373b59","Type":"ContainerDied","Data":"4b5104dd6a19fb6f62cda6b685dbf82642a7eef3812133d369a0aeb4d69c21c3"} Oct 14 13:32:05 crc kubenswrapper[4725]: I1014 13:32:05.366278 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-5j9rx" Oct 14 13:32:05 crc kubenswrapper[4725]: I1014 13:32:05.404631 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b32894cc-6bf3-46d4-981c-be6040373b59-config-data\") pod \"b32894cc-6bf3-46d4-981c-be6040373b59\" (UID: \"b32894cc-6bf3-46d4-981c-be6040373b59\") " Oct 14 13:32:05 crc kubenswrapper[4725]: I1014 13:32:05.404756 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b32894cc-6bf3-46d4-981c-be6040373b59-combined-ca-bundle\") pod \"b32894cc-6bf3-46d4-981c-be6040373b59\" (UID: \"b32894cc-6bf3-46d4-981c-be6040373b59\") " Oct 14 13:32:05 crc kubenswrapper[4725]: I1014 13:32:05.404833 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b32894cc-6bf3-46d4-981c-be6040373b59-db-sync-config-data\") pod \"b32894cc-6bf3-46d4-981c-be6040373b59\" (UID: \"b32894cc-6bf3-46d4-981c-be6040373b59\") " Oct 14 13:32:05 crc kubenswrapper[4725]: I1014 13:32:05.404913 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn7rz\" (UniqueName: \"kubernetes.io/projected/b32894cc-6bf3-46d4-981c-be6040373b59-kube-api-access-mn7rz\") pod \"b32894cc-6bf3-46d4-981c-be6040373b59\" (UID: \"b32894cc-6bf3-46d4-981c-be6040373b59\") " Oct 14 13:32:05 crc kubenswrapper[4725]: I1014 13:32:05.418647 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b32894cc-6bf3-46d4-981c-be6040373b59-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b32894cc-6bf3-46d4-981c-be6040373b59" (UID: "b32894cc-6bf3-46d4-981c-be6040373b59"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:32:05 crc kubenswrapper[4725]: I1014 13:32:05.418803 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b32894cc-6bf3-46d4-981c-be6040373b59-kube-api-access-mn7rz" (OuterVolumeSpecName: "kube-api-access-mn7rz") pod "b32894cc-6bf3-46d4-981c-be6040373b59" (UID: "b32894cc-6bf3-46d4-981c-be6040373b59"). InnerVolumeSpecName "kube-api-access-mn7rz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:32:05 crc kubenswrapper[4725]: I1014 13:32:05.451123 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b32894cc-6bf3-46d4-981c-be6040373b59-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b32894cc-6bf3-46d4-981c-be6040373b59" (UID: "b32894cc-6bf3-46d4-981c-be6040373b59"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:32:05 crc kubenswrapper[4725]: I1014 13:32:05.465570 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b32894cc-6bf3-46d4-981c-be6040373b59-config-data" (OuterVolumeSpecName: "config-data") pod "b32894cc-6bf3-46d4-981c-be6040373b59" (UID: "b32894cc-6bf3-46d4-981c-be6040373b59"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:32:05 crc kubenswrapper[4725]: I1014 13:32:05.506424 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b32894cc-6bf3-46d4-981c-be6040373b59-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:05 crc kubenswrapper[4725]: I1014 13:32:05.506470 4725 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b32894cc-6bf3-46d4-981c-be6040373b59-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:05 crc kubenswrapper[4725]: I1014 13:32:05.506483 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mn7rz\" (UniqueName: \"kubernetes.io/projected/b32894cc-6bf3-46d4-981c-be6040373b59-kube-api-access-mn7rz\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:05 crc kubenswrapper[4725]: I1014 13:32:05.506498 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b32894cc-6bf3-46d4-981c-be6040373b59-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:05 crc kubenswrapper[4725]: I1014 13:32:05.685786 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-5a30-account-create-r6ccp"] Oct 14 13:32:05 crc kubenswrapper[4725]: E1014 13:32:05.686123 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a3f32f8-bd1d-47c8-ab85-6ca3d5e571fe" containerName="mariadb-database-create" Oct 14 13:32:05 crc kubenswrapper[4725]: I1014 13:32:05.686141 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a3f32f8-bd1d-47c8-ab85-6ca3d5e571fe" containerName="mariadb-database-create" Oct 14 13:32:05 crc kubenswrapper[4725]: E1014 13:32:05.686158 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a34964eb-f117-4f77-a5f8-bbc22ae15966" containerName="mariadb-database-create" Oct 14 13:32:05 crc kubenswrapper[4725]: I1014 13:32:05.686165 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="a34964eb-f117-4f77-a5f8-bbc22ae15966" containerName="mariadb-database-create" Oct 14 13:32:05 crc kubenswrapper[4725]: E1014 13:32:05.686191 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c91e6d9-e590-4467-81db-d9a571375693" containerName="mariadb-database-create" Oct 14 13:32:05 crc kubenswrapper[4725]: I1014 13:32:05.686198 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c91e6d9-e590-4467-81db-d9a571375693" containerName="mariadb-database-create" Oct 14 13:32:05 crc kubenswrapper[4725]: E1014 13:32:05.686208 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b32894cc-6bf3-46d4-981c-be6040373b59" containerName="glance-db-sync" Oct 14 13:32:05 crc kubenswrapper[4725]: I1014 13:32:05.686213 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="b32894cc-6bf3-46d4-981c-be6040373b59" containerName="glance-db-sync" Oct 14 13:32:05 crc kubenswrapper[4725]: I1014 13:32:05.686359 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="a34964eb-f117-4f77-a5f8-bbc22ae15966" containerName="mariadb-database-create" Oct 14 13:32:05 crc kubenswrapper[4725]: I1014 13:32:05.686383 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a3f32f8-bd1d-47c8-ab85-6ca3d5e571fe" containerName="mariadb-database-create" Oct 14 13:32:05 crc kubenswrapper[4725]: I1014 13:32:05.686398 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c91e6d9-e590-4467-81db-d9a571375693" 
containerName="mariadb-database-create" Oct 14 13:32:05 crc kubenswrapper[4725]: I1014 13:32:05.686408 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="b32894cc-6bf3-46d4-981c-be6040373b59" containerName="glance-db-sync" Oct 14 13:32:05 crc kubenswrapper[4725]: I1014 13:32:05.687092 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5a30-account-create-r6ccp" Oct 14 13:32:05 crc kubenswrapper[4725]: I1014 13:32:05.700845 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Oct 14 13:32:05 crc kubenswrapper[4725]: I1014 13:32:05.714575 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-5a30-account-create-r6ccp"] Oct 14 13:32:05 crc kubenswrapper[4725]: I1014 13:32:05.768613 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-d39e-account-create-r5sdl"] Oct 14 13:32:05 crc kubenswrapper[4725]: I1014 13:32:05.770040 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d39e-account-create-r5sdl" Oct 14 13:32:05 crc kubenswrapper[4725]: I1014 13:32:05.772945 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Oct 14 13:32:05 crc kubenswrapper[4725]: I1014 13:32:05.776883 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-d39e-account-create-r5sdl"] Oct 14 13:32:05 crc kubenswrapper[4725]: I1014 13:32:05.810426 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjzp4\" (UniqueName: \"kubernetes.io/projected/e18d471e-098d-4024-9e83-ed212464dad9-kube-api-access-tjzp4\") pod \"cinder-5a30-account-create-r6ccp\" (UID: \"e18d471e-098d-4024-9e83-ed212464dad9\") " pod="openstack/cinder-5a30-account-create-r6ccp" Oct 14 13:32:05 crc kubenswrapper[4725]: I1014 13:32:05.810517 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp444\" (UniqueName: \"kubernetes.io/projected/80a33093-d85c-4406-9308-38a8b757040b-kube-api-access-dp444\") pod \"barbican-d39e-account-create-r5sdl\" (UID: \"80a33093-d85c-4406-9308-38a8b757040b\") " pod="openstack/barbican-d39e-account-create-r5sdl" Oct 14 13:32:05 crc kubenswrapper[4725]: I1014 13:32:05.911914 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp444\" (UniqueName: \"kubernetes.io/projected/80a33093-d85c-4406-9308-38a8b757040b-kube-api-access-dp444\") pod \"barbican-d39e-account-create-r5sdl\" (UID: \"80a33093-d85c-4406-9308-38a8b757040b\") " pod="openstack/barbican-d39e-account-create-r5sdl" Oct 14 13:32:05 crc kubenswrapper[4725]: I1014 13:32:05.912050 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjzp4\" (UniqueName: \"kubernetes.io/projected/e18d471e-098d-4024-9e83-ed212464dad9-kube-api-access-tjzp4\") pod \"cinder-5a30-account-create-r6ccp\" (UID: \"e18d471e-098d-4024-9e83-ed212464dad9\") " pod="openstack/cinder-5a30-account-create-r6ccp" Oct 14 13:32:05 crc kubenswrapper[4725]: I1014 13:32:05.943999 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjzp4\" (UniqueName: \"kubernetes.io/projected/e18d471e-098d-4024-9e83-ed212464dad9-kube-api-access-tjzp4\") pod \"cinder-5a30-account-create-r6ccp\" (UID: \"e18d471e-098d-4024-9e83-ed212464dad9\") " pod="openstack/cinder-5a30-account-create-r6ccp" Oct 14 
13:32:05 crc kubenswrapper[4725]: I1014 13:32:05.945210 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp444\" (UniqueName: \"kubernetes.io/projected/80a33093-d85c-4406-9308-38a8b757040b-kube-api-access-dp444\") pod \"barbican-d39e-account-create-r5sdl\" (UID: \"80a33093-d85c-4406-9308-38a8b757040b\") " pod="openstack/barbican-d39e-account-create-r5sdl" Oct 14 13:32:05 crc kubenswrapper[4725]: I1014 13:32:05.968793 4725 generic.go:334] "Generic (PLEG): container finished" podID="35b7e347-c4b6-450d-88c1-f352d4301fed" containerID="1add8eb3ccc3329b410231e970f2cdcd827d32cc14a769fea42289d6cd77b06a" exitCode=0 Oct 14 13:32:05 crc kubenswrapper[4725]: I1014 13:32:05.968874 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x95f9" event={"ID":"35b7e347-c4b6-450d-88c1-f352d4301fed","Type":"ContainerDied","Data":"1add8eb3ccc3329b410231e970f2cdcd827d32cc14a769fea42289d6cd77b06a"} Oct 14 13:32:05 crc kubenswrapper[4725]: I1014 13:32:05.970891 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5j9rx" event={"ID":"b32894cc-6bf3-46d4-981c-be6040373b59","Type":"ContainerDied","Data":"c546ca8f2e130132cbb2a7a83e8bfb42bc7e5a5529f4943646b1b4fa11524d5d"} Oct 14 13:32:05 crc kubenswrapper[4725]: I1014 13:32:05.970926 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c546ca8f2e130132cbb2a7a83e8bfb42bc7e5a5529f4943646b1b4fa11524d5d" Oct 14 13:32:05 crc kubenswrapper[4725]: I1014 13:32:05.970969 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-5j9rx" Oct 14 13:32:05 crc kubenswrapper[4725]: I1014 13:32:05.973069 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-c12d-account-create-m5xhh"] Oct 14 13:32:05 crc kubenswrapper[4725]: I1014 13:32:05.974249 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c12d-account-create-m5xhh" Oct 14 13:32:05 crc kubenswrapper[4725]: I1014 13:32:05.975991 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Oct 14 13:32:05 crc kubenswrapper[4725]: I1014 13:32:05.992834 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c12d-account-create-m5xhh"] Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.024230 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5a30-account-create-r6ccp" Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.026137 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfdwr\" (UniqueName: \"kubernetes.io/projected/553743dd-4497-4092-8723-3c03580db239-kube-api-access-kfdwr\") pod \"neutron-c12d-account-create-m5xhh\" (UID: \"553743dd-4497-4092-8723-3c03580db239\") " pod="openstack/neutron-c12d-account-create-m5xhh" Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.093724 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-d39e-account-create-r5sdl" Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.127650 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfdwr\" (UniqueName: \"kubernetes.io/projected/553743dd-4497-4092-8723-3c03580db239-kube-api-access-kfdwr\") pod \"neutron-c12d-account-create-m5xhh\" (UID: \"553743dd-4497-4092-8723-3c03580db239\") " pod="openstack/neutron-c12d-account-create-m5xhh" Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.155224 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfdwr\" (UniqueName: \"kubernetes.io/projected/553743dd-4497-4092-8723-3c03580db239-kube-api-access-kfdwr\") pod \"neutron-c12d-account-create-m5xhh\" (UID: \"553743dd-4497-4092-8723-3c03580db239\") " pod="openstack/neutron-c12d-account-create-m5xhh" Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.292838 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-fl2mq"] Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.293058 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77585f5f8c-fl2mq" podUID="053a41b6-63b9-4abc-af41-c865c8e232a3" containerName="dnsmasq-dns" containerID="cri-o://5b5e7cf8d4448ba7eefe67c203130f2442d0d6163d0756d0b5534486e7e166da" gracePeriod=10 Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.302613 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77585f5f8c-fl2mq" Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.340525 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-577tj"] Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.341986 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-577tj" Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.348401 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c12d-account-create-m5xhh" Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.365026 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-577tj"] Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.436114 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2618d426-3d62-4121-af03-d1ed1c23bf6e-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-577tj\" (UID: \"2618d426-3d62-4121-af03-d1ed1c23bf6e\") " pod="openstack/dnsmasq-dns-7ff5475cc9-577tj" Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.436995 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2618d426-3d62-4121-af03-d1ed1c23bf6e-config\") pod \"dnsmasq-dns-7ff5475cc9-577tj\" (UID: \"2618d426-3d62-4121-af03-d1ed1c23bf6e\") " pod="openstack/dnsmasq-dns-7ff5475cc9-577tj" Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.437179 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2618d426-3d62-4121-af03-d1ed1c23bf6e-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-577tj\" (UID: \"2618d426-3d62-4121-af03-d1ed1c23bf6e\") " pod="openstack/dnsmasq-dns-7ff5475cc9-577tj" Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.437283 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtldc\" (UniqueName: \"kubernetes.io/projected/2618d426-3d62-4121-af03-d1ed1c23bf6e-kube-api-access-rtldc\") pod \"dnsmasq-dns-7ff5475cc9-577tj\" (UID: \"2618d426-3d62-4121-af03-d1ed1c23bf6e\") " pod="openstack/dnsmasq-dns-7ff5475cc9-577tj" Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.437419 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2618d426-3d62-4121-af03-d1ed1c23bf6e-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-577tj\" (UID: \"2618d426-3d62-4121-af03-d1ed1c23bf6e\") " pod="openstack/dnsmasq-dns-7ff5475cc9-577tj" Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.437527 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2618d426-3d62-4121-af03-d1ed1c23bf6e-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-577tj\" (UID: \"2618d426-3d62-4121-af03-d1ed1c23bf6e\") " pod="openstack/dnsmasq-dns-7ff5475cc9-577tj" Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.523170 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-5a30-account-create-r6ccp"] Oct 14 13:32:06 crc kubenswrapper[4725]: W1014 13:32:06.530662 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode18d471e_098d_4024_9e83_ed212464dad9.slice/crio-3e240b2023417277f8b8333bc2db7d8547058056b106fb2f20d5f2794b304fa3 WatchSource:0}: Error finding container 3e240b2023417277f8b8333bc2db7d8547058056b106fb2f20d5f2794b304fa3: Status 404 returned error can't find the container with id 3e240b2023417277f8b8333bc2db7d8547058056b106fb2f20d5f2794b304fa3 Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.540263 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/2618d426-3d62-4121-af03-d1ed1c23bf6e-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-577tj\" (UID: \"2618d426-3d62-4121-af03-d1ed1c23bf6e\") " pod="openstack/dnsmasq-dns-7ff5475cc9-577tj" Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.540314 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2618d426-3d62-4121-af03-d1ed1c23bf6e-config\") pod \"dnsmasq-dns-7ff5475cc9-577tj\" (UID: \"2618d426-3d62-4121-af03-d1ed1c23bf6e\") " pod="openstack/dnsmasq-dns-7ff5475cc9-577tj" Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.540376 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2618d426-3d62-4121-af03-d1ed1c23bf6e-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-577tj\" (UID: \"2618d426-3d62-4121-af03-d1ed1c23bf6e\") " pod="openstack/dnsmasq-dns-7ff5475cc9-577tj" Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.540399 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtldc\" (UniqueName: \"kubernetes.io/projected/2618d426-3d62-4121-af03-d1ed1c23bf6e-kube-api-access-rtldc\") pod \"dnsmasq-dns-7ff5475cc9-577tj\" (UID: \"2618d426-3d62-4121-af03-d1ed1c23bf6e\") " pod="openstack/dnsmasq-dns-7ff5475cc9-577tj" Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.540437 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2618d426-3d62-4121-af03-d1ed1c23bf6e-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-577tj\" (UID: \"2618d426-3d62-4121-af03-d1ed1c23bf6e\") " pod="openstack/dnsmasq-dns-7ff5475cc9-577tj" Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.540511 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2618d426-3d62-4121-af03-d1ed1c23bf6e-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-577tj\" (UID: \"2618d426-3d62-4121-af03-d1ed1c23bf6e\") " pod="openstack/dnsmasq-dns-7ff5475cc9-577tj" Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.541612 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2618d426-3d62-4121-af03-d1ed1c23bf6e-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-577tj\" (UID: \"2618d426-3d62-4121-af03-d1ed1c23bf6e\") " pod="openstack/dnsmasq-dns-7ff5475cc9-577tj" Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.541651 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2618d426-3d62-4121-af03-d1ed1c23bf6e-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-577tj\" (UID: \"2618d426-3d62-4121-af03-d1ed1c23bf6e\") " pod="openstack/dnsmasq-dns-7ff5475cc9-577tj" Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.542029 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2618d426-3d62-4121-af03-d1ed1c23bf6e-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-577tj\" (UID: \"2618d426-3d62-4121-af03-d1ed1c23bf6e\") " pod="openstack/dnsmasq-dns-7ff5475cc9-577tj" Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.542084 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2618d426-3d62-4121-af03-d1ed1c23bf6e-config\") pod 
\"dnsmasq-dns-7ff5475cc9-577tj\" (UID: \"2618d426-3d62-4121-af03-d1ed1c23bf6e\") " pod="openstack/dnsmasq-dns-7ff5475cc9-577tj" Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.542683 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2618d426-3d62-4121-af03-d1ed1c23bf6e-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-577tj\" (UID: \"2618d426-3d62-4121-af03-d1ed1c23bf6e\") " pod="openstack/dnsmasq-dns-7ff5475cc9-577tj" Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.560831 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtldc\" (UniqueName: \"kubernetes.io/projected/2618d426-3d62-4121-af03-d1ed1c23bf6e-kube-api-access-rtldc\") pod \"dnsmasq-dns-7ff5475cc9-577tj\" (UID: \"2618d426-3d62-4121-af03-d1ed1c23bf6e\") " pod="openstack/dnsmasq-dns-7ff5475cc9-577tj" Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.683647 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-fl2mq" Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.705846 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-d39e-account-create-r5sdl"] Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.749811 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/053a41b6-63b9-4abc-af41-c865c8e232a3-dns-swift-storage-0\") pod \"053a41b6-63b9-4abc-af41-c865c8e232a3\" (UID: \"053a41b6-63b9-4abc-af41-c865c8e232a3\") " Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.749971 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/053a41b6-63b9-4abc-af41-c865c8e232a3-dns-svc\") pod \"053a41b6-63b9-4abc-af41-c865c8e232a3\" (UID: \"053a41b6-63b9-4abc-af41-c865c8e232a3\") " Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.750006 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/053a41b6-63b9-4abc-af41-c865c8e232a3-ovsdbserver-nb\") pod \"053a41b6-63b9-4abc-af41-c865c8e232a3\" (UID: \"053a41b6-63b9-4abc-af41-c865c8e232a3\") " Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.750056 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/053a41b6-63b9-4abc-af41-c865c8e232a3-config\") pod \"053a41b6-63b9-4abc-af41-c865c8e232a3\" (UID: \"053a41b6-63b9-4abc-af41-c865c8e232a3\") " Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.750123 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/053a41b6-63b9-4abc-af41-c865c8e232a3-ovsdbserver-sb\") pod \"053a41b6-63b9-4abc-af41-c865c8e232a3\" (UID: \"053a41b6-63b9-4abc-af41-c865c8e232a3\") " Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.750148 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7k9k\" (UniqueName: \"kubernetes.io/projected/053a41b6-63b9-4abc-af41-c865c8e232a3-kube-api-access-r7k9k\") pod \"053a41b6-63b9-4abc-af41-c865c8e232a3\" (UID: \"053a41b6-63b9-4abc-af41-c865c8e232a3\") " Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.754716 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/053a41b6-63b9-4abc-af41-c865c8e232a3-kube-api-access-r7k9k" (OuterVolumeSpecName: "kube-api-access-r7k9k") pod "053a41b6-63b9-4abc-af41-c865c8e232a3" (UID: "053a41b6-63b9-4abc-af41-c865c8e232a3"). InnerVolumeSpecName "kube-api-access-r7k9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.803131 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/053a41b6-63b9-4abc-af41-c865c8e232a3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "053a41b6-63b9-4abc-af41-c865c8e232a3" (UID: "053a41b6-63b9-4abc-af41-c865c8e232a3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.808869 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/053a41b6-63b9-4abc-af41-c865c8e232a3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "053a41b6-63b9-4abc-af41-c865c8e232a3" (UID: "053a41b6-63b9-4abc-af41-c865c8e232a3"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.809544 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/053a41b6-63b9-4abc-af41-c865c8e232a3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "053a41b6-63b9-4abc-af41-c865c8e232a3" (UID: "053a41b6-63b9-4abc-af41-c865c8e232a3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.836502 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/053a41b6-63b9-4abc-af41-c865c8e232a3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "053a41b6-63b9-4abc-af41-c865c8e232a3" (UID: "053a41b6-63b9-4abc-af41-c865c8e232a3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.842224 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/053a41b6-63b9-4abc-af41-c865c8e232a3-config" (OuterVolumeSpecName: "config") pod "053a41b6-63b9-4abc-af41-c865c8e232a3" (UID: "053a41b6-63b9-4abc-af41-c865c8e232a3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.852917 4725 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/053a41b6-63b9-4abc-af41-c865c8e232a3-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.852991 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/053a41b6-63b9-4abc-af41-c865c8e232a3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.853004 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/053a41b6-63b9-4abc-af41-c865c8e232a3-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.853012 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/053a41b6-63b9-4abc-af41-c865c8e232a3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.853022 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7k9k\" (UniqueName: \"kubernetes.io/projected/053a41b6-63b9-4abc-af41-c865c8e232a3-kube-api-access-r7k9k\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.853032 4725 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/053a41b6-63b9-4abc-af41-c865c8e232a3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.857103 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-577tj" Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.891757 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c12d-account-create-m5xhh"] Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.982397 4725 generic.go:334] "Generic (PLEG): container finished" podID="053a41b6-63b9-4abc-af41-c865c8e232a3" containerID="5b5e7cf8d4448ba7eefe67c203130f2442d0d6163d0756d0b5534486e7e166da" exitCode=0 Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.982478 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-fl2mq" event={"ID":"053a41b6-63b9-4abc-af41-c865c8e232a3","Type":"ContainerDied","Data":"5b5e7cf8d4448ba7eefe67c203130f2442d0d6163d0756d0b5534486e7e166da"} Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.982505 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-fl2mq" event={"ID":"053a41b6-63b9-4abc-af41-c865c8e232a3","Type":"ContainerDied","Data":"ddc99c56fcc5d977cb295c02064c2ad0503021d048cff9f7e83c47c6963dffd8"} Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.982520 4725 scope.go:117] "RemoveContainer" containerID="5b5e7cf8d4448ba7eefe67c203130f2442d0d6163d0756d0b5534486e7e166da" Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.982649 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-fl2mq" Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.986940 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c12d-account-create-m5xhh" event={"ID":"553743dd-4497-4092-8723-3c03580db239","Type":"ContainerStarted","Data":"152415dc925dce05ee304cf4c407c66e1b2d02493f4d29d0671bc623d608756b"} Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.988387 4725 generic.go:334] "Generic (PLEG): container finished" podID="80a33093-d85c-4406-9308-38a8b757040b" containerID="3a760e040e946e58c2caa0b2189ed2e47fe00bb9932bce387dc8a372eff04c60" exitCode=0 Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.988472 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d39e-account-create-r5sdl" event={"ID":"80a33093-d85c-4406-9308-38a8b757040b","Type":"ContainerDied","Data":"3a760e040e946e58c2caa0b2189ed2e47fe00bb9932bce387dc8a372eff04c60"} Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.988501 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d39e-account-create-r5sdl" event={"ID":"80a33093-d85c-4406-9308-38a8b757040b","Type":"ContainerStarted","Data":"771f25c4eb5c11634187e382eddf3ff455e9a04b2c7eb45376d551344d0741ba"} Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.990159 4725 generic.go:334] "Generic (PLEG): container finished" podID="e18d471e-098d-4024-9e83-ed212464dad9" containerID="0de015714ba68014cf3ba29e0b115fe0ab458bc94916393b0b696298470b7b26" exitCode=0 Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.991178 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5a30-account-create-r6ccp" event={"ID":"e18d471e-098d-4024-9e83-ed212464dad9","Type":"ContainerDied","Data":"0de015714ba68014cf3ba29e0b115fe0ab458bc94916393b0b696298470b7b26"} Oct 14 13:32:06 crc kubenswrapper[4725]: I1014 13:32:06.991217 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5a30-account-create-r6ccp" event={"ID":"e18d471e-098d-4024-9e83-ed212464dad9","Type":"ContainerStarted","Data":"3e240b2023417277f8b8333bc2db7d8547058056b106fb2f20d5f2794b304fa3"} Oct 14 13:32:07 crc kubenswrapper[4725]: I1014 13:32:07.062295 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-fl2mq"] Oct 14 13:32:07 crc kubenswrapper[4725]: I1014 13:32:07.064398 4725 scope.go:117] "RemoveContainer" containerID="7ece3fd56a52d99c89c0fe4eb1f7836f39b49e6c5a780bb8820158f0bc787a50" Oct 14 13:32:07 crc kubenswrapper[4725]: I1014 13:32:07.073650 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-fl2mq"] Oct 14 13:32:07 crc kubenswrapper[4725]: I1014 13:32:07.088667 4725 scope.go:117] "RemoveContainer" containerID="5b5e7cf8d4448ba7eefe67c203130f2442d0d6163d0756d0b5534486e7e166da" Oct 14 13:32:07 crc kubenswrapper[4725]: E1014 13:32:07.089828 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b5e7cf8d4448ba7eefe67c203130f2442d0d6163d0756d0b5534486e7e166da\": container with ID starting with 5b5e7cf8d4448ba7eefe67c203130f2442d0d6163d0756d0b5534486e7e166da not found: ID does not exist" containerID="5b5e7cf8d4448ba7eefe67c203130f2442d0d6163d0756d0b5534486e7e166da" Oct 14 13:32:07 crc kubenswrapper[4725]: I1014 13:32:07.089852 4725 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5b5e7cf8d4448ba7eefe67c203130f2442d0d6163d0756d0b5534486e7e166da"} err="failed to get container status \"5b5e7cf8d4448ba7eefe67c203130f2442d0d6163d0756d0b5534486e7e166da\": rpc error: code = NotFound desc = could not find container \"5b5e7cf8d4448ba7eefe67c203130f2442d0d6163d0756d0b5534486e7e166da\": container with ID starting with 5b5e7cf8d4448ba7eefe67c203130f2442d0d6163d0756d0b5534486e7e166da not found: ID does not exist" Oct 14 13:32:07 crc kubenswrapper[4725]: I1014 13:32:07.089873 4725 scope.go:117] "RemoveContainer" containerID="7ece3fd56a52d99c89c0fe4eb1f7836f39b49e6c5a780bb8820158f0bc787a50" Oct 14 13:32:07 crc kubenswrapper[4725]: E1014 13:32:07.092075 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ece3fd56a52d99c89c0fe4eb1f7836f39b49e6c5a780bb8820158f0bc787a50\": container with ID starting with 7ece3fd56a52d99c89c0fe4eb1f7836f39b49e6c5a780bb8820158f0bc787a50 not found: ID does not exist" containerID="7ece3fd56a52d99c89c0fe4eb1f7836f39b49e6c5a780bb8820158f0bc787a50" Oct 14 13:32:07 crc kubenswrapper[4725]: I1014 13:32:07.092099 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ece3fd56a52d99c89c0fe4eb1f7836f39b49e6c5a780bb8820158f0bc787a50"} err="failed to get container status \"7ece3fd56a52d99c89c0fe4eb1f7836f39b49e6c5a780bb8820158f0bc787a50\": rpc error: code = NotFound desc = could not find container \"7ece3fd56a52d99c89c0fe4eb1f7836f39b49e6c5a780bb8820158f0bc787a50\": container with ID starting with 7ece3fd56a52d99c89c0fe4eb1f7836f39b49e6c5a780bb8820158f0bc787a50 not found: ID does not exist" Oct 14 13:32:07 crc kubenswrapper[4725]: I1014 13:32:07.267771 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-577tj"] Oct 14 13:32:07 crc kubenswrapper[4725]: I1014 13:32:07.268056 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-x95f9" Oct 14 13:32:07 crc kubenswrapper[4725]: W1014 13:32:07.270550 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2618d426_3d62_4121_af03_d1ed1c23bf6e.slice/crio-3854f44335e99cdd7598a19778304fb1ad8d42c8339f16ea9d46f671501b07be WatchSource:0}: Error finding container 3854f44335e99cdd7598a19778304fb1ad8d42c8339f16ea9d46f671501b07be: Status 404 returned error can't find the container with id 3854f44335e99cdd7598a19778304fb1ad8d42c8339f16ea9d46f671501b07be Oct 14 13:32:07 crc kubenswrapper[4725]: I1014 13:32:07.361210 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlczh\" (UniqueName: \"kubernetes.io/projected/35b7e347-c4b6-450d-88c1-f352d4301fed-kube-api-access-nlczh\") pod \"35b7e347-c4b6-450d-88c1-f352d4301fed\" (UID: \"35b7e347-c4b6-450d-88c1-f352d4301fed\") " Oct 14 13:32:07 crc kubenswrapper[4725]: I1014 13:32:07.361281 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35b7e347-c4b6-450d-88c1-f352d4301fed-config-data\") pod \"35b7e347-c4b6-450d-88c1-f352d4301fed\" (UID: \"35b7e347-c4b6-450d-88c1-f352d4301fed\") " Oct 14 13:32:07 crc kubenswrapper[4725]: I1014 13:32:07.361338 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35b7e347-c4b6-450d-88c1-f352d4301fed-combined-ca-bundle\") pod \"35b7e347-c4b6-450d-88c1-f352d4301fed\" (UID: \"35b7e347-c4b6-450d-88c1-f352d4301fed\") " Oct 14 13:32:07 crc kubenswrapper[4725]: I1014 13:32:07.366828 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35b7e347-c4b6-450d-88c1-f352d4301fed-kube-api-access-nlczh" (OuterVolumeSpecName: "kube-api-access-nlczh") pod "35b7e347-c4b6-450d-88c1-f352d4301fed" (UID: "35b7e347-c4b6-450d-88c1-f352d4301fed"). InnerVolumeSpecName "kube-api-access-nlczh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:32:07 crc kubenswrapper[4725]: I1014 13:32:07.389006 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35b7e347-c4b6-450d-88c1-f352d4301fed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "35b7e347-c4b6-450d-88c1-f352d4301fed" (UID: "35b7e347-c4b6-450d-88c1-f352d4301fed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:32:07 crc kubenswrapper[4725]: I1014 13:32:07.421548 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35b7e347-c4b6-450d-88c1-f352d4301fed-config-data" (OuterVolumeSpecName: "config-data") pod "35b7e347-c4b6-450d-88c1-f352d4301fed" (UID: "35b7e347-c4b6-450d-88c1-f352d4301fed"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:32:07 crc kubenswrapper[4725]: I1014 13:32:07.463514 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35b7e347-c4b6-450d-88c1-f352d4301fed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:07 crc kubenswrapper[4725]: I1014 13:32:07.463559 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlczh\" (UniqueName: \"kubernetes.io/projected/35b7e347-c4b6-450d-88c1-f352d4301fed-kube-api-access-nlczh\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:07 crc kubenswrapper[4725]: I1014 13:32:07.463572 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35b7e347-c4b6-450d-88c1-f352d4301fed-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:07 crc kubenswrapper[4725]: I1014 13:32:07.945302 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="053a41b6-63b9-4abc-af41-c865c8e232a3" path="/var/lib/kubelet/pods/053a41b6-63b9-4abc-af41-c865c8e232a3/volumes" Oct 14 13:32:08 crc kubenswrapper[4725]: I1014 13:32:08.003052 4725 generic.go:334] "Generic (PLEG): container finished" podID="553743dd-4497-4092-8723-3c03580db239" containerID="35086069c239f4b52ceaa888fba55e318a12b66b854c9a72abd62f6499dbac7e" exitCode=0 Oct 14 13:32:08 crc kubenswrapper[4725]: I1014 13:32:08.003146 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c12d-account-create-m5xhh" event={"ID":"553743dd-4497-4092-8723-3c03580db239","Type":"ContainerDied","Data":"35086069c239f4b52ceaa888fba55e318a12b66b854c9a72abd62f6499dbac7e"} Oct 14 13:32:08 crc kubenswrapper[4725]: I1014 13:32:08.008411 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-x95f9" Oct 14 13:32:08 crc kubenswrapper[4725]: I1014 13:32:08.008817 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x95f9" event={"ID":"35b7e347-c4b6-450d-88c1-f352d4301fed","Type":"ContainerDied","Data":"1a407052e92c36abd26e532f4afdab217850d57e8f99fa489e163fcf7e5c85c9"} Oct 14 13:32:08 crc kubenswrapper[4725]: I1014 13:32:08.009015 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a407052e92c36abd26e532f4afdab217850d57e8f99fa489e163fcf7e5c85c9" Oct 14 13:32:08 crc kubenswrapper[4725]: I1014 13:32:08.010769 4725 generic.go:334] "Generic (PLEG): container finished" podID="2618d426-3d62-4121-af03-d1ed1c23bf6e" containerID="3252e9437e0e012b8f59c7dd93ed1b367aad422eb8cf0d2e6e8dcd109b1fea08" exitCode=0 Oct 14 13:32:08 crc kubenswrapper[4725]: I1014 13:32:08.010856 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-577tj" event={"ID":"2618d426-3d62-4121-af03-d1ed1c23bf6e","Type":"ContainerDied","Data":"3252e9437e0e012b8f59c7dd93ed1b367aad422eb8cf0d2e6e8dcd109b1fea08"} Oct 14 13:32:08 crc kubenswrapper[4725]: I1014 13:32:08.010920 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-577tj" event={"ID":"2618d426-3d62-4121-af03-d1ed1c23bf6e","Type":"ContainerStarted","Data":"3854f44335e99cdd7598a19778304fb1ad8d42c8339f16ea9d46f671501b07be"} Oct 14 13:32:08 crc kubenswrapper[4725]: I1014 13:32:08.210355 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-577tj"] Oct 14 13:32:08 crc kubenswrapper[4725]: I1014 13:32:08.235309 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-2sx29"] Oct 14 13:32:08 crc kubenswrapper[4725]: E1014 13:32:08.235665 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35b7e347-c4b6-450d-88c1-f352d4301fed" containerName="keystone-db-sync" Oct 14 13:32:08 crc kubenswrapper[4725]: I1014 13:32:08.235681 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="35b7e347-c4b6-450d-88c1-f352d4301fed" containerName="keystone-db-sync" Oct 14 13:32:08 crc kubenswrapper[4725]: E1014 13:32:08.235707 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="053a41b6-63b9-4abc-af41-c865c8e232a3" containerName="init" Oct 14 13:32:08 crc kubenswrapper[4725]: I1014 13:32:08.235713 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="053a41b6-63b9-4abc-af41-c865c8e232a3" containerName="init" Oct 14 13:32:08 crc kubenswrapper[4725]: E1014 13:32:08.235726 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="053a41b6-63b9-4abc-af41-c865c8e232a3" containerName="dnsmasq-dns" Oct 14 13:32:08 crc kubenswrapper[4725]: I1014 13:32:08.235733 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="053a41b6-63b9-4abc-af41-c865c8e232a3" containerName="dnsmasq-dns" Oct 14 13:32:08 crc kubenswrapper[4725]: I1014 13:32:08.235882 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="053a41b6-63b9-4abc-af41-c865c8e232a3" containerName="dnsmasq-dns" Oct 14 13:32:08 crc kubenswrapper[4725]: I1014 13:32:08.235903 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="35b7e347-c4b6-450d-88c1-f352d4301fed" containerName="keystone-db-sync" Oct 14 13:32:08 crc kubenswrapper[4725]: I1014 13:32:08.236477 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2sx29" Oct 14 13:32:08 crc kubenswrapper[4725]: I1014 13:32:08.242433 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 14 13:32:08 crc kubenswrapper[4725]: I1014 13:32:08.242848 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-h7l4t" Oct 14 13:32:08 crc kubenswrapper[4725]: I1014 13:32:08.243300 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 14 13:32:08 crc kubenswrapper[4725]: I1014 13:32:08.247262 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 14 13:32:08 crc kubenswrapper[4725]: I1014 13:32:08.251829 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-rrb4t"] Oct 14 13:32:08 crc kubenswrapper[4725]: I1014 13:32:08.253717 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-rrb4t" Oct 14 13:32:08 crc kubenswrapper[4725]: I1014 13:32:08.258724 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-rrb4t"] Oct 14 13:32:08 crc kubenswrapper[4725]: I1014 13:32:08.269468 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2sx29"] Oct 14 13:32:08 crc kubenswrapper[4725]: I1014 13:32:08.281696 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/16644f47-970c-4ed8-b44d-04a8f4765a63-credential-keys\") pod \"keystone-bootstrap-2sx29\" (UID: \"16644f47-970c-4ed8-b44d-04a8f4765a63\") " pod="openstack/keystone-bootstrap-2sx29" Oct 14 13:32:08 crc kubenswrapper[4725]: I1014 13:32:08.281749 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e60f2f91-ff70-4ffb-86a1-653403235ef3-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-rrb4t\" (UID: \"e60f2f91-ff70-4ffb-86a1-653403235ef3\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-rrb4t" Oct 14 13:32:08 crc kubenswrapper[4725]: I1014 13:32:08.281777 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/16644f47-970c-4ed8-b44d-04a8f4765a63-fernet-keys\") pod \"keystone-bootstrap-2sx29\" (UID: \"16644f47-970c-4ed8-b44d-04a8f4765a63\") " pod="openstack/keystone-bootstrap-2sx29" Oct 14 13:32:08 crc kubenswrapper[4725]: I1014 13:32:08.281861 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e60f2f91-ff70-4ffb-86a1-653403235ef3-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-rrb4t\" (UID: \"e60f2f91-ff70-4ffb-86a1-653403235ef3\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-rrb4t" Oct 14 13:32:08 crc kubenswrapper[4725]: I1014 13:32:08.281951 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16644f47-970c-4ed8-b44d-04a8f4765a63-scripts\") pod \"keystone-bootstrap-2sx29\" (UID: \"16644f47-970c-4ed8-b44d-04a8f4765a63\") " pod="openstack/keystone-bootstrap-2sx29" Oct 14 13:32:08 crc kubenswrapper[4725]: I1014 13:32:08.281978 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-ws8tk\" (UniqueName: \"kubernetes.io/projected/e60f2f91-ff70-4ffb-86a1-653403235ef3-kube-api-access-ws8tk\") pod \"dnsmasq-dns-5c5cc7c5ff-rrb4t\" (UID: \"e60f2f91-ff70-4ffb-86a1-653403235ef3\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-rrb4t" Oct 14 13:32:08 crc kubenswrapper[4725]: I1014 13:32:08.282014 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e60f2f91-ff70-4ffb-86a1-653403235ef3-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-rrb4t\" (UID: \"e60f2f91-ff70-4ffb-86a1-653403235ef3\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-rrb4t" Oct 14 13:32:08 crc kubenswrapper[4725]: I1014 13:32:08.282037 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e60f2f91-ff70-4ffb-86a1-653403235ef3-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-rrb4t\" (UID: \"e60f2f91-ff70-4ffb-86a1-653403235ef3\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-rrb4t" Oct 14 13:32:08 crc kubenswrapper[4725]: I1014 13:32:08.282134 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16644f47-970c-4ed8-b44d-04a8f4765a63-combined-ca-bundle\") pod \"keystone-bootstrap-2sx29\" (UID: \"16644f47-970c-4ed8-b44d-04a8f4765a63\") " pod="openstack/keystone-bootstrap-2sx29" Oct 14 13:32:08 crc kubenswrapper[4725]: I1014 13:32:08.282151 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e60f2f91-ff70-4ffb-86a1-653403235ef3-config\") pod \"dnsmasq-dns-5c5cc7c5ff-rrb4t\" (UID: \"e60f2f91-ff70-4ffb-86a1-653403235ef3\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-rrb4t" Oct 14 13:32:08 crc kubenswrapper[4725]: I1014 13:32:08.282191 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmfrq\" (UniqueName: \"kubernetes.io/projected/16644f47-970c-4ed8-b44d-04a8f4765a63-kube-api-access-zmfrq\") pod \"keystone-bootstrap-2sx29\" (UID: \"16644f47-970c-4ed8-b44d-04a8f4765a63\") " pod="openstack/keystone-bootstrap-2sx29" Oct 14 13:32:08 crc kubenswrapper[4725]: I1014 13:32:08.282219 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16644f47-970c-4ed8-b44d-04a8f4765a63-config-data\") pod \"keystone-bootstrap-2sx29\" (UID: \"16644f47-970c-4ed8-b44d-04a8f4765a63\") " pod="openstack/keystone-bootstrap-2sx29" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.357876 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-684dcc5995-mbkf4"] Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.359167 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-684dcc5995-mbkf4" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.361691 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-74bgg" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.366064 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.366725 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.366963 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.394176 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/16644f47-970c-4ed8-b44d-04a8f4765a63-credential-keys\") pod \"keystone-bootstrap-2sx29\" (UID: \"16644f47-970c-4ed8-b44d-04a8f4765a63\") " pod="openstack/keystone-bootstrap-2sx29" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.394229 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e60f2f91-ff70-4ffb-86a1-653403235ef3-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-rrb4t\" (UID: \"e60f2f91-ff70-4ffb-86a1-653403235ef3\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-rrb4t" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.394258 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/16644f47-970c-4ed8-b44d-04a8f4765a63-fernet-keys\") pod \"keystone-bootstrap-2sx29\" (UID: \"16644f47-970c-4ed8-b44d-04a8f4765a63\") " pod="openstack/keystone-bootstrap-2sx29" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.394287 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e60f2f91-ff70-4ffb-86a1-653403235ef3-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-rrb4t\" (UID: \"e60f2f91-ff70-4ffb-86a1-653403235ef3\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-rrb4t" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.394325 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16644f47-970c-4ed8-b44d-04a8f4765a63-scripts\") pod \"keystone-bootstrap-2sx29\" (UID: \"16644f47-970c-4ed8-b44d-04a8f4765a63\") " pod="openstack/keystone-bootstrap-2sx29" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.394344 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws8tk\" (UniqueName: \"kubernetes.io/projected/e60f2f91-ff70-4ffb-86a1-653403235ef3-kube-api-access-ws8tk\") pod \"dnsmasq-dns-5c5cc7c5ff-rrb4t\" (UID: \"e60f2f91-ff70-4ffb-86a1-653403235ef3\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-rrb4t" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.394361 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2xjc\" (UniqueName: \"kubernetes.io/projected/486ea1cf-7023-407b-bb7f-c67ce34c42aa-kube-api-access-p2xjc\") pod \"horizon-684dcc5995-mbkf4\" (UID: \"486ea1cf-7023-407b-bb7f-c67ce34c42aa\") " pod="openstack/horizon-684dcc5995-mbkf4" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.394383 4725 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/486ea1cf-7023-407b-bb7f-c67ce34c42aa-horizon-secret-key\") pod \"horizon-684dcc5995-mbkf4\" (UID: \"486ea1cf-7023-407b-bb7f-c67ce34c42aa\") " pod="openstack/horizon-684dcc5995-mbkf4" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.394403 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e60f2f91-ff70-4ffb-86a1-653403235ef3-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-rrb4t\" (UID: \"e60f2f91-ff70-4ffb-86a1-653403235ef3\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-rrb4t" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.394424 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e60f2f91-ff70-4ffb-86a1-653403235ef3-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-rrb4t\" (UID: \"e60f2f91-ff70-4ffb-86a1-653403235ef3\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-rrb4t" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.394462 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/486ea1cf-7023-407b-bb7f-c67ce34c42aa-logs\") pod \"horizon-684dcc5995-mbkf4\" (UID: \"486ea1cf-7023-407b-bb7f-c67ce34c42aa\") " pod="openstack/horizon-684dcc5995-mbkf4" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.394501 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16644f47-970c-4ed8-b44d-04a8f4765a63-combined-ca-bundle\") pod \"keystone-bootstrap-2sx29\" (UID: \"16644f47-970c-4ed8-b44d-04a8f4765a63\") " pod="openstack/keystone-bootstrap-2sx29" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.394518 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e60f2f91-ff70-4ffb-86a1-653403235ef3-config\") pod \"dnsmasq-dns-5c5cc7c5ff-rrb4t\" (UID: \"e60f2f91-ff70-4ffb-86a1-653403235ef3\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-rrb4t" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.394538 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/486ea1cf-7023-407b-bb7f-c67ce34c42aa-scripts\") pod \"horizon-684dcc5995-mbkf4\" (UID: \"486ea1cf-7023-407b-bb7f-c67ce34c42aa\") " pod="openstack/horizon-684dcc5995-mbkf4" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.394566 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmfrq\" (UniqueName: \"kubernetes.io/projected/16644f47-970c-4ed8-b44d-04a8f4765a63-kube-api-access-zmfrq\") pod \"keystone-bootstrap-2sx29\" (UID: \"16644f47-970c-4ed8-b44d-04a8f4765a63\") " pod="openstack/keystone-bootstrap-2sx29" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.394598 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/486ea1cf-7023-407b-bb7f-c67ce34c42aa-config-data\") pod \"horizon-684dcc5995-mbkf4\" (UID: \"486ea1cf-7023-407b-bb7f-c67ce34c42aa\") " pod="openstack/horizon-684dcc5995-mbkf4" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.394614 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/16644f47-970c-4ed8-b44d-04a8f4765a63-config-data\") pod \"keystone-bootstrap-2sx29\" (UID: \"16644f47-970c-4ed8-b44d-04a8f4765a63\") " pod="openstack/keystone-bootstrap-2sx29" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.399047 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e60f2f91-ff70-4ffb-86a1-653403235ef3-config\") pod \"dnsmasq-dns-5c5cc7c5ff-rrb4t\" (UID: \"e60f2f91-ff70-4ffb-86a1-653403235ef3\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-rrb4t" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.399912 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e60f2f91-ff70-4ffb-86a1-653403235ef3-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-rrb4t\" (UID: \"e60f2f91-ff70-4ffb-86a1-653403235ef3\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-rrb4t" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.401348 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16644f47-970c-4ed8-b44d-04a8f4765a63-config-data\") pod \"keystone-bootstrap-2sx29\" (UID: \"16644f47-970c-4ed8-b44d-04a8f4765a63\") " pod="openstack/keystone-bootstrap-2sx29" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.402084 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e60f2f91-ff70-4ffb-86a1-653403235ef3-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-rrb4t\" (UID: \"e60f2f91-ff70-4ffb-86a1-653403235ef3\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-rrb4t" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.402812 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/16644f47-970c-4ed8-b44d-04a8f4765a63-credential-keys\") pod \"keystone-bootstrap-2sx29\" (UID: \"16644f47-970c-4ed8-b44d-04a8f4765a63\") " pod="openstack/keystone-bootstrap-2sx29" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.403195 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e60f2f91-ff70-4ffb-86a1-653403235ef3-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-rrb4t\" (UID: \"e60f2f91-ff70-4ffb-86a1-653403235ef3\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-rrb4t" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.403717 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e60f2f91-ff70-4ffb-86a1-653403235ef3-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-rrb4t\" (UID: \"e60f2f91-ff70-4ffb-86a1-653403235ef3\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-rrb4t" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.419967 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-684dcc5995-mbkf4"] Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.431637 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16644f47-970c-4ed8-b44d-04a8f4765a63-combined-ca-bundle\") pod \"keystone-bootstrap-2sx29\" (UID: \"16644f47-970c-4ed8-b44d-04a8f4765a63\") " pod="openstack/keystone-bootstrap-2sx29" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.434407 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/16644f47-970c-4ed8-b44d-04a8f4765a63-fernet-keys\") pod \"keystone-bootstrap-2sx29\" (UID: \"16644f47-970c-4ed8-b44d-04a8f4765a63\") " pod="openstack/keystone-bootstrap-2sx29" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.479511 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws8tk\" (UniqueName: \"kubernetes.io/projected/e60f2f91-ff70-4ffb-86a1-653403235ef3-kube-api-access-ws8tk\") pod \"dnsmasq-dns-5c5cc7c5ff-rrb4t\" (UID: \"e60f2f91-ff70-4ffb-86a1-653403235ef3\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-rrb4t" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.480074 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmfrq\" (UniqueName: \"kubernetes.io/projected/16644f47-970c-4ed8-b44d-04a8f4765a63-kube-api-access-zmfrq\") pod \"keystone-bootstrap-2sx29\" (UID: \"16644f47-970c-4ed8-b44d-04a8f4765a63\") " pod="openstack/keystone-bootstrap-2sx29" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.489640 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16644f47-970c-4ed8-b44d-04a8f4765a63-scripts\") pod \"keystone-bootstrap-2sx29\" (UID: \"16644f47-970c-4ed8-b44d-04a8f4765a63\") " pod="openstack/keystone-bootstrap-2sx29" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.511614 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2xjc\" (UniqueName: \"kubernetes.io/projected/486ea1cf-7023-407b-bb7f-c67ce34c42aa-kube-api-access-p2xjc\") pod \"horizon-684dcc5995-mbkf4\" (UID: \"486ea1cf-7023-407b-bb7f-c67ce34c42aa\") " pod="openstack/horizon-684dcc5995-mbkf4" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.511661 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/486ea1cf-7023-407b-bb7f-c67ce34c42aa-horizon-secret-key\") pod \"horizon-684dcc5995-mbkf4\" (UID: \"486ea1cf-7023-407b-bb7f-c67ce34c42aa\") " pod="openstack/horizon-684dcc5995-mbkf4" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.511695 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/486ea1cf-7023-407b-bb7f-c67ce34c42aa-logs\") pod \"horizon-684dcc5995-mbkf4\" (UID: \"486ea1cf-7023-407b-bb7f-c67ce34c42aa\") " pod="openstack/horizon-684dcc5995-mbkf4" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.511748 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/486ea1cf-7023-407b-bb7f-c67ce34c42aa-scripts\") pod \"horizon-684dcc5995-mbkf4\" (UID: \"486ea1cf-7023-407b-bb7f-c67ce34c42aa\") " pod="openstack/horizon-684dcc5995-mbkf4" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.511783 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/486ea1cf-7023-407b-bb7f-c67ce34c42aa-config-data\") pod \"horizon-684dcc5995-mbkf4\" (UID: \"486ea1cf-7023-407b-bb7f-c67ce34c42aa\") " pod="openstack/horizon-684dcc5995-mbkf4" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.514367 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/486ea1cf-7023-407b-bb7f-c67ce34c42aa-config-data\") pod \"horizon-684dcc5995-mbkf4\" (UID: \"486ea1cf-7023-407b-bb7f-c67ce34c42aa\") " 
pod="openstack/horizon-684dcc5995-mbkf4" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.517602 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/486ea1cf-7023-407b-bb7f-c67ce34c42aa-horizon-secret-key\") pod \"horizon-684dcc5995-mbkf4\" (UID: \"486ea1cf-7023-407b-bb7f-c67ce34c42aa\") " pod="openstack/horizon-684dcc5995-mbkf4" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.517881 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/486ea1cf-7023-407b-bb7f-c67ce34c42aa-logs\") pod \"horizon-684dcc5995-mbkf4\" (UID: \"486ea1cf-7023-407b-bb7f-c67ce34c42aa\") " pod="openstack/horizon-684dcc5995-mbkf4" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.518307 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/486ea1cf-7023-407b-bb7f-c67ce34c42aa-scripts\") pod \"horizon-684dcc5995-mbkf4\" (UID: \"486ea1cf-7023-407b-bb7f-c67ce34c42aa\") " pod="openstack/horizon-684dcc5995-mbkf4" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.543739 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2xjc\" (UniqueName: \"kubernetes.io/projected/486ea1cf-7023-407b-bb7f-c67ce34c42aa-kube-api-access-p2xjc\") pod \"horizon-684dcc5995-mbkf4\" (UID: \"486ea1cf-7023-407b-bb7f-c67ce34c42aa\") " pod="openstack/horizon-684dcc5995-mbkf4" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.543890 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6f49767877-s5fb5"] Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.616731 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-684dcc5995-mbkf4" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.627741 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-rrb4t" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.632333 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2sx29" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.638092 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f49767877-s5fb5"] Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.638198 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f49767877-s5fb5" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.669524 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-fgqxw"] Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.711506 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-fgqxw" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.755427 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-cpxkv" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.755427 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6316dc0c-436a-4cb3-86ae-0d073a62980e-logs\") pod \"horizon-6f49767877-s5fb5\" (UID: \"6316dc0c-436a-4cb3-86ae-0d073a62980e\") " pod="openstack/horizon-6f49767877-s5fb5" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.755774 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6316dc0c-436a-4cb3-86ae-0d073a62980e-config-data\") pod \"horizon-6f49767877-s5fb5\" (UID: \"6316dc0c-436a-4cb3-86ae-0d073a62980e\") " pod="openstack/horizon-6f49767877-s5fb5" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.755835 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6316dc0c-436a-4cb3-86ae-0d073a62980e-scripts\") pod \"horizon-6f49767877-s5fb5\" (UID: \"6316dc0c-436a-4cb3-86ae-0d073a62980e\") " pod="openstack/horizon-6f49767877-s5fb5" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.755919 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brqf2\" (UniqueName: \"kubernetes.io/projected/6316dc0c-436a-4cb3-86ae-0d073a62980e-kube-api-access-brqf2\") pod \"horizon-6f49767877-s5fb5\" (UID: \"6316dc0c-436a-4cb3-86ae-0d073a62980e\") " pod="openstack/horizon-6f49767877-s5fb5" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.755947 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6316dc0c-436a-4cb3-86ae-0d073a62980e-horizon-secret-key\") pod \"horizon-6f49767877-s5fb5\" (UID: \"6316dc0c-436a-4cb3-86ae-0d073a62980e\") " pod="openstack/horizon-6f49767877-s5fb5" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.757274 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.757337 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.765538 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-rrb4t"] Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.767263 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d39e-account-create-r5sdl" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.806879 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-fgqxw"] Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.830867 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-5a30-account-create-r6ccp" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.832124 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 13:32:09 crc kubenswrapper[4725]: E1014 13:32:08.832441 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e18d471e-098d-4024-9e83-ed212464dad9" containerName="mariadb-account-create" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.832465 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="e18d471e-098d-4024-9e83-ed212464dad9" containerName="mariadb-account-create" Oct 14 13:32:09 crc kubenswrapper[4725]: E1014 13:32:08.832504 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80a33093-d85c-4406-9308-38a8b757040b" containerName="mariadb-account-create" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.832514 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="80a33093-d85c-4406-9308-38a8b757040b" containerName="mariadb-account-create" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.832666 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="e18d471e-098d-4024-9e83-ed212464dad9" containerName="mariadb-account-create" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.832679 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="80a33093-d85c-4406-9308-38a8b757040b" containerName="mariadb-account-create" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.833535 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.839696 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.839772 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-p8cwt" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.840020 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.841499 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-n8mrx"] Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.843496 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-n8mrx" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.857489 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6316dc0c-436a-4cb3-86ae-0d073a62980e-logs\") pod \"horizon-6f49767877-s5fb5\" (UID: \"6316dc0c-436a-4cb3-86ae-0d073a62980e\") " pod="openstack/horizon-6f49767877-s5fb5" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.857537 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85dsd\" (UniqueName: \"kubernetes.io/projected/7d968db2-d49e-4eee-9927-11fd32b9cd89-kube-api-access-85dsd\") pod \"placement-db-sync-fgqxw\" (UID: \"7d968db2-d49e-4eee-9927-11fd32b9cd89\") " pod="openstack/placement-db-sync-fgqxw" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.857579 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d968db2-d49e-4eee-9927-11fd32b9cd89-logs\") pod \"placement-db-sync-fgqxw\" (UID: \"7d968db2-d49e-4eee-9927-11fd32b9cd89\") " pod="openstack/placement-db-sync-fgqxw" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.857610 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d968db2-d49e-4eee-9927-11fd32b9cd89-scripts\") pod \"placement-db-sync-fgqxw\" (UID: \"7d968db2-d49e-4eee-9927-11fd32b9cd89\") " pod="openstack/placement-db-sync-fgqxw" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.857636 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6316dc0c-436a-4cb3-86ae-0d073a62980e-config-data\") pod \"horizon-6f49767877-s5fb5\" (UID: \"6316dc0c-436a-4cb3-86ae-0d073a62980e\") " pod="openstack/horizon-6f49767877-s5fb5" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.857662 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6316dc0c-436a-4cb3-86ae-0d073a62980e-scripts\") pod \"horizon-6f49767877-s5fb5\" (UID: \"6316dc0c-436a-4cb3-86ae-0d073a62980e\") " pod="openstack/horizon-6f49767877-s5fb5" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.857692 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d968db2-d49e-4eee-9927-11fd32b9cd89-config-data\") pod \"placement-db-sync-fgqxw\" (UID: \"7d968db2-d49e-4eee-9927-11fd32b9cd89\") " pod="openstack/placement-db-sync-fgqxw" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.857711 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brqf2\" (UniqueName: \"kubernetes.io/projected/6316dc0c-436a-4cb3-86ae-0d073a62980e-kube-api-access-brqf2\") pod \"horizon-6f49767877-s5fb5\" (UID: \"6316dc0c-436a-4cb3-86ae-0d073a62980e\") " pod="openstack/horizon-6f49767877-s5fb5" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.857730 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6316dc0c-436a-4cb3-86ae-0d073a62980e-horizon-secret-key\") pod \"horizon-6f49767877-s5fb5\" (UID: \"6316dc0c-436a-4cb3-86ae-0d073a62980e\") " pod="openstack/horizon-6f49767877-s5fb5" Oct 14 13:32:09 crc kubenswrapper[4725]: 
I1014 13:32:08.857750 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d968db2-d49e-4eee-9927-11fd32b9cd89-combined-ca-bundle\") pod \"placement-db-sync-fgqxw\" (UID: \"7d968db2-d49e-4eee-9927-11fd32b9cd89\") " pod="openstack/placement-db-sync-fgqxw" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.858142 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6316dc0c-436a-4cb3-86ae-0d073a62980e-logs\") pod \"horizon-6f49767877-s5fb5\" (UID: \"6316dc0c-436a-4cb3-86ae-0d073a62980e\") " pod="openstack/horizon-6f49767877-s5fb5" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.878582 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6316dc0c-436a-4cb3-86ae-0d073a62980e-scripts\") pod \"horizon-6f49767877-s5fb5\" (UID: \"6316dc0c-436a-4cb3-86ae-0d073a62980e\") " pod="openstack/horizon-6f49767877-s5fb5" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.878614 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6316dc0c-436a-4cb3-86ae-0d073a62980e-config-data\") pod \"horizon-6f49767877-s5fb5\" (UID: \"6316dc0c-436a-4cb3-86ae-0d073a62980e\") " pod="openstack/horizon-6f49767877-s5fb5" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.878622 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.907093 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6316dc0c-436a-4cb3-86ae-0d073a62980e-horizon-secret-key\") pod \"horizon-6f49767877-s5fb5\" (UID: \"6316dc0c-436a-4cb3-86ae-0d073a62980e\") " pod="openstack/horizon-6f49767877-s5fb5" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.913361 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brqf2\" (UniqueName: \"kubernetes.io/projected/6316dc0c-436a-4cb3-86ae-0d073a62980e-kube-api-access-brqf2\") pod \"horizon-6f49767877-s5fb5\" (UID: \"6316dc0c-436a-4cb3-86ae-0d073a62980e\") " pod="openstack/horizon-6f49767877-s5fb5" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.917228 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-n8mrx"] Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.961586 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dp444\" (UniqueName: \"kubernetes.io/projected/80a33093-d85c-4406-9308-38a8b757040b-kube-api-access-dp444\") pod \"80a33093-d85c-4406-9308-38a8b757040b\" (UID: \"80a33093-d85c-4406-9308-38a8b757040b\") " Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.962126 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjzp4\" (UniqueName: \"kubernetes.io/projected/e18d471e-098d-4024-9e83-ed212464dad9-kube-api-access-tjzp4\") pod \"e18d471e-098d-4024-9e83-ed212464dad9\" (UID: \"e18d471e-098d-4024-9e83-ed212464dad9\") " Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.970245 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d968db2-d49e-4eee-9927-11fd32b9cd89-scripts\") pod \"placement-db-sync-fgqxw\" (UID: 
\"7d968db2-d49e-4eee-9927-11fd32b9cd89\") " pod="openstack/placement-db-sync-fgqxw" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.970482 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5-config-data\") pod \"glance-default-external-api-0\" (UID: \"8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5\") " pod="openstack/glance-default-external-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.970559 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5\") " pod="openstack/glance-default-external-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.970602 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5\") " pod="openstack/glance-default-external-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.970646 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d968db2-d49e-4eee-9927-11fd32b9cd89-config-data\") pod \"placement-db-sync-fgqxw\" (UID: \"7d968db2-d49e-4eee-9927-11fd32b9cd89\") " pod="openstack/placement-db-sync-fgqxw" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.970740 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5-logs\") pod \"glance-default-external-api-0\" (UID: \"8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5\") " pod="openstack/glance-default-external-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.970813 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d968db2-d49e-4eee-9927-11fd32b9cd89-combined-ca-bundle\") pod \"placement-db-sync-fgqxw\" (UID: \"7d968db2-d49e-4eee-9927-11fd32b9cd89\") " pod="openstack/placement-db-sync-fgqxw" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.970895 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5\") " pod="openstack/glance-default-external-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.970950 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5-scripts\") pod \"glance-default-external-api-0\" (UID: \"8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5\") " pod="openstack/glance-default-external-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.971231 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-n8mrx\" (UID: 
\"59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3\") " pod="openstack/dnsmasq-dns-8b5c85b87-n8mrx" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.971320 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85dsd\" (UniqueName: \"kubernetes.io/projected/7d968db2-d49e-4eee-9927-11fd32b9cd89-kube-api-access-85dsd\") pod \"placement-db-sync-fgqxw\" (UID: \"7d968db2-d49e-4eee-9927-11fd32b9cd89\") " pod="openstack/placement-db-sync-fgqxw" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.971372 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-n8mrx\" (UID: \"59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3\") " pod="openstack/dnsmasq-dns-8b5c85b87-n8mrx" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.966398 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80a33093-d85c-4406-9308-38a8b757040b-kube-api-access-dp444" (OuterVolumeSpecName: "kube-api-access-dp444") pod "80a33093-d85c-4406-9308-38a8b757040b" (UID: "80a33093-d85c-4406-9308-38a8b757040b"). InnerVolumeSpecName "kube-api-access-dp444". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.968477 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e18d471e-098d-4024-9e83-ed212464dad9-kube-api-access-tjzp4" (OuterVolumeSpecName: "kube-api-access-tjzp4") pod "e18d471e-098d-4024-9e83-ed212464dad9" (UID: "e18d471e-098d-4024-9e83-ed212464dad9"). InnerVolumeSpecName "kube-api-access-tjzp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.976644 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-n8mrx\" (UID: \"59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3\") " pod="openstack/dnsmasq-dns-8b5c85b87-n8mrx" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.976815 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr2j8\" (UniqueName: \"kubernetes.io/projected/8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5-kube-api-access-hr2j8\") pod \"glance-default-external-api-0\" (UID: \"8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5\") " pod="openstack/glance-default-external-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.976842 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3-config\") pod \"dnsmasq-dns-8b5c85b87-n8mrx\" (UID: \"59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3\") " pod="openstack/dnsmasq-dns-8b5c85b87-n8mrx" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.976869 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d968db2-d49e-4eee-9927-11fd32b9cd89-logs\") pod \"placement-db-sync-fgqxw\" (UID: \"7d968db2-d49e-4eee-9927-11fd32b9cd89\") " pod="openstack/placement-db-sync-fgqxw" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.976893 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-n8mrx\" (UID: \"59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3\") " pod="openstack/dnsmasq-dns-8b5c85b87-n8mrx" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.976928 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsxfr\" (UniqueName: \"kubernetes.io/projected/59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3-kube-api-access-xsxfr\") pod \"dnsmasq-dns-8b5c85b87-n8mrx\" (UID: \"59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3\") " pod="openstack/dnsmasq-dns-8b5c85b87-n8mrx" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.977028 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dp444\" (UniqueName: \"kubernetes.io/projected/80a33093-d85c-4406-9308-38a8b757040b-kube-api-access-dp444\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.977045 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjzp4\" (UniqueName: \"kubernetes.io/projected/e18d471e-098d-4024-9e83-ed212464dad9-kube-api-access-tjzp4\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.977413 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d968db2-d49e-4eee-9927-11fd32b9cd89-logs\") pod \"placement-db-sync-fgqxw\" (UID: \"7d968db2-d49e-4eee-9927-11fd32b9cd89\") " pod="openstack/placement-db-sync-fgqxw" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.980716 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d968db2-d49e-4eee-9927-11fd32b9cd89-config-data\") pod \"placement-db-sync-fgqxw\" (UID: \"7d968db2-d49e-4eee-9927-11fd32b9cd89\") " pod="openstack/placement-db-sync-fgqxw" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.981855 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d968db2-d49e-4eee-9927-11fd32b9cd89-scripts\") pod \"placement-db-sync-fgqxw\" (UID: \"7d968db2-d49e-4eee-9927-11fd32b9cd89\") " pod="openstack/placement-db-sync-fgqxw" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:08.982914 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d968db2-d49e-4eee-9927-11fd32b9cd89-combined-ca-bundle\") pod \"placement-db-sync-fgqxw\" (UID: \"7d968db2-d49e-4eee-9927-11fd32b9cd89\") " pod="openstack/placement-db-sync-fgqxw" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.003846 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85dsd\" (UniqueName: \"kubernetes.io/projected/7d968db2-d49e-4eee-9927-11fd32b9cd89-kube-api-access-85dsd\") pod \"placement-db-sync-fgqxw\" (UID: \"7d968db2-d49e-4eee-9927-11fd32b9cd89\") " pod="openstack/placement-db-sync-fgqxw" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.024173 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-577tj" event={"ID":"2618d426-3d62-4121-af03-d1ed1c23bf6e","Type":"ContainerStarted","Data":"f07c799f165b332262c191a63418d61f1d068585f8f68a1d088786b7cec6b0b8"} Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.024331 4725 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-7ff5475cc9-577tj" podUID="2618d426-3d62-4121-af03-d1ed1c23bf6e" containerName="dnsmasq-dns" containerID="cri-o://f07c799f165b332262c191a63418d61f1d068585f8f68a1d088786b7cec6b0b8" gracePeriod=10 Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.024637 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7ff5475cc9-577tj" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.025970 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d39e-account-create-r5sdl" event={"ID":"80a33093-d85c-4406-9308-38a8b757040b","Type":"ContainerDied","Data":"771f25c4eb5c11634187e382eddf3ff455e9a04b2c7eb45376d551344d0741ba"} Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.025987 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="771f25c4eb5c11634187e382eddf3ff455e9a04b2c7eb45376d551344d0741ba" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.026025 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d39e-account-create-r5sdl" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.035302 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5a30-account-create-r6ccp" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.035528 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5a30-account-create-r6ccp" event={"ID":"e18d471e-098d-4024-9e83-ed212464dad9","Type":"ContainerDied","Data":"3e240b2023417277f8b8333bc2db7d8547058056b106fb2f20d5f2794b304fa3"} Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.035572 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e240b2023417277f8b8333bc2db7d8547058056b106fb2f20d5f2794b304fa3" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.052764 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7ff5475cc9-577tj" podStartSLOduration=3.052738271 podStartE2EDuration="3.052738271s" podCreationTimestamp="2025-10-14 13:32:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:32:09.04567696 +0000 UTC m=+1045.894111769" watchObservedRunningTime="2025-10-14 13:32:09.052738271 +0000 UTC m=+1045.901173090" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.079430 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5\") " pod="openstack/glance-default-external-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.079559 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5-scripts\") pod \"glance-default-external-api-0\" (UID: \"8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5\") " pod="openstack/glance-default-external-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.079616 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-n8mrx\" (UID: \"59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3\") " 
pod="openstack/dnsmasq-dns-8b5c85b87-n8mrx" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.079695 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-n8mrx\" (UID: \"59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3\") " pod="openstack/dnsmasq-dns-8b5c85b87-n8mrx" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.079727 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-n8mrx\" (UID: \"59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3\") " pod="openstack/dnsmasq-dns-8b5c85b87-n8mrx" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.079758 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr2j8\" (UniqueName: \"kubernetes.io/projected/8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5-kube-api-access-hr2j8\") pod \"glance-default-external-api-0\" (UID: \"8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5\") " pod="openstack/glance-default-external-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.079778 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3-config\") pod \"dnsmasq-dns-8b5c85b87-n8mrx\" (UID: \"59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3\") " pod="openstack/dnsmasq-dns-8b5c85b87-n8mrx" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.079799 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-n8mrx\" (UID: \"59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3\") " pod="openstack/dnsmasq-dns-8b5c85b87-n8mrx" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.079824 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsxfr\" (UniqueName: \"kubernetes.io/projected/59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3-kube-api-access-xsxfr\") pod \"dnsmasq-dns-8b5c85b87-n8mrx\" (UID: \"59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3\") " pod="openstack/dnsmasq-dns-8b5c85b87-n8mrx" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.079892 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5-config-data\") pod \"glance-default-external-api-0\" (UID: \"8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5\") " pod="openstack/glance-default-external-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.079924 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5\") " pod="openstack/glance-default-external-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.079945 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5\") " pod="openstack/glance-default-external-api-0" Oct 14 13:32:09 crc 
kubenswrapper[4725]: I1014 13:32:09.080006 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5-logs\") pod \"glance-default-external-api-0\" (UID: \"8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5\") " pod="openstack/glance-default-external-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.080944 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5\") " pod="openstack/glance-default-external-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.082179 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-n8mrx\" (UID: \"59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3\") " pod="openstack/dnsmasq-dns-8b5c85b87-n8mrx" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.083503 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-n8mrx\" (UID: \"59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3\") " pod="openstack/dnsmasq-dns-8b5c85b87-n8mrx" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.083970 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5-logs\") pod \"glance-default-external-api-0\" (UID: \"8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5\") " pod="openstack/glance-default-external-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.084109 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-n8mrx\" (UID: \"59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3\") " pod="openstack/dnsmasq-dns-8b5c85b87-n8mrx" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.084271 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3-config\") pod \"dnsmasq-dns-8b5c85b87-n8mrx\" (UID: \"59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3\") " pod="openstack/dnsmasq-dns-8b5c85b87-n8mrx" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.084390 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-n8mrx\" (UID: \"59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3\") " pod="openstack/dnsmasq-dns-8b5c85b87-n8mrx" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.086182 4725 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.087618 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5\") " pod="openstack/glance-default-external-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.090068 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5-config-data\") pod \"glance-default-external-api-0\" (UID: \"8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5\") " pod="openstack/glance-default-external-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.103014 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5-scripts\") pod \"glance-default-external-api-0\" (UID: \"8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5\") " pod="openstack/glance-default-external-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.110636 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsxfr\" (UniqueName: \"kubernetes.io/projected/59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3-kube-api-access-xsxfr\") pod \"dnsmasq-dns-8b5c85b87-n8mrx\" (UID: \"59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3\") " pod="openstack/dnsmasq-dns-8b5c85b87-n8mrx" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.112975 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5\") " pod="openstack/glance-default-external-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.116387 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr2j8\" (UniqueName: \"kubernetes.io/projected/8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5-kube-api-access-hr2j8\") pod \"glance-default-external-api-0\" (UID: \"8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5\") " pod="openstack/glance-default-external-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.133153 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f49767877-s5fb5" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.221056 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-fgqxw" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.252744 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.262215 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-n8mrx" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.445570 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.447715 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.450239 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.467405 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.488012 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.490722 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.494045 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.494285 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.510873 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.593192 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq7z4\" (UniqueName: \"kubernetes.io/projected/4bd1dc15-7e73-480b-9599-123a18602d5e-kube-api-access-mq7z4\") pod \"ceilometer-0\" (UID: \"4bd1dc15-7e73-480b-9599-123a18602d5e\") " pod="openstack/ceilometer-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.593277 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"2aa09706-8333-48d8-8bde-64b4ccba5d09\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.593524 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bd1dc15-7e73-480b-9599-123a18602d5e-run-httpd\") pod \"ceilometer-0\" (UID: \"4bd1dc15-7e73-480b-9599-123a18602d5e\") " pod="openstack/ceilometer-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.593757 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bd1dc15-7e73-480b-9599-123a18602d5e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4bd1dc15-7e73-480b-9599-123a18602d5e\") " pod="openstack/ceilometer-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.593834 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2aa09706-8333-48d8-8bde-64b4ccba5d09-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2aa09706-8333-48d8-8bde-64b4ccba5d09\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.594217 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bd1dc15-7e73-480b-9599-123a18602d5e-log-httpd\") pod \"ceilometer-0\" (UID: \"4bd1dc15-7e73-480b-9599-123a18602d5e\") " pod="openstack/ceilometer-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.594340 
4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bd1dc15-7e73-480b-9599-123a18602d5e-scripts\") pod \"ceilometer-0\" (UID: \"4bd1dc15-7e73-480b-9599-123a18602d5e\") " pod="openstack/ceilometer-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.594405 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aa09706-8333-48d8-8bde-64b4ccba5d09-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2aa09706-8333-48d8-8bde-64b4ccba5d09\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.594672 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bd1dc15-7e73-480b-9599-123a18602d5e-config-data\") pod \"ceilometer-0\" (UID: \"4bd1dc15-7e73-480b-9599-123a18602d5e\") " pod="openstack/ceilometer-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.594794 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b9jj\" (UniqueName: \"kubernetes.io/projected/2aa09706-8333-48d8-8bde-64b4ccba5d09-kube-api-access-6b9jj\") pod \"glance-default-internal-api-0\" (UID: \"2aa09706-8333-48d8-8bde-64b4ccba5d09\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.594871 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2aa09706-8333-48d8-8bde-64b4ccba5d09-logs\") pod \"glance-default-internal-api-0\" (UID: \"2aa09706-8333-48d8-8bde-64b4ccba5d09\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.594974 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4bd1dc15-7e73-480b-9599-123a18602d5e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4bd1dc15-7e73-480b-9599-123a18602d5e\") " pod="openstack/ceilometer-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.595061 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2aa09706-8333-48d8-8bde-64b4ccba5d09-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2aa09706-8333-48d8-8bde-64b4ccba5d09\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.595139 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa09706-8333-48d8-8bde-64b4ccba5d09-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2aa09706-8333-48d8-8bde-64b4ccba5d09\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.696430 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bd1dc15-7e73-480b-9599-123a18602d5e-log-httpd\") pod \"ceilometer-0\" (UID: \"4bd1dc15-7e73-480b-9599-123a18602d5e\") " pod="openstack/ceilometer-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.696504 4725 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bd1dc15-7e73-480b-9599-123a18602d5e-scripts\") pod \"ceilometer-0\" (UID: \"4bd1dc15-7e73-480b-9599-123a18602d5e\") " pod="openstack/ceilometer-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.696528 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aa09706-8333-48d8-8bde-64b4ccba5d09-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2aa09706-8333-48d8-8bde-64b4ccba5d09\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.696590 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bd1dc15-7e73-480b-9599-123a18602d5e-config-data\") pod \"ceilometer-0\" (UID: \"4bd1dc15-7e73-480b-9599-123a18602d5e\") " pod="openstack/ceilometer-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.696640 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6b9jj\" (UniqueName: \"kubernetes.io/projected/2aa09706-8333-48d8-8bde-64b4ccba5d09-kube-api-access-6b9jj\") pod \"glance-default-internal-api-0\" (UID: \"2aa09706-8333-48d8-8bde-64b4ccba5d09\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.696656 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2aa09706-8333-48d8-8bde-64b4ccba5d09-logs\") pod \"glance-default-internal-api-0\" (UID: \"2aa09706-8333-48d8-8bde-64b4ccba5d09\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.696678 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4bd1dc15-7e73-480b-9599-123a18602d5e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4bd1dc15-7e73-480b-9599-123a18602d5e\") " pod="openstack/ceilometer-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.696699 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2aa09706-8333-48d8-8bde-64b4ccba5d09-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2aa09706-8333-48d8-8bde-64b4ccba5d09\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.696719 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa09706-8333-48d8-8bde-64b4ccba5d09-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2aa09706-8333-48d8-8bde-64b4ccba5d09\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.696741 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq7z4\" (UniqueName: \"kubernetes.io/projected/4bd1dc15-7e73-480b-9599-123a18602d5e-kube-api-access-mq7z4\") pod \"ceilometer-0\" (UID: \"4bd1dc15-7e73-480b-9599-123a18602d5e\") " pod="openstack/ceilometer-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.696764 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"2aa09706-8333-48d8-8bde-64b4ccba5d09\") " 
pod="openstack/glance-default-internal-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.696781 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bd1dc15-7e73-480b-9599-123a18602d5e-run-httpd\") pod \"ceilometer-0\" (UID: \"4bd1dc15-7e73-480b-9599-123a18602d5e\") " pod="openstack/ceilometer-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.696814 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bd1dc15-7e73-480b-9599-123a18602d5e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4bd1dc15-7e73-480b-9599-123a18602d5e\") " pod="openstack/ceilometer-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.696833 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2aa09706-8333-48d8-8bde-64b4ccba5d09-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2aa09706-8333-48d8-8bde-64b4ccba5d09\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.697142 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bd1dc15-7e73-480b-9599-123a18602d5e-log-httpd\") pod \"ceilometer-0\" (UID: \"4bd1dc15-7e73-480b-9599-123a18602d5e\") " pod="openstack/ceilometer-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.697707 4725 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"2aa09706-8333-48d8-8bde-64b4ccba5d09\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.699047 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2aa09706-8333-48d8-8bde-64b4ccba5d09-logs\") pod \"glance-default-internal-api-0\" (UID: \"2aa09706-8333-48d8-8bde-64b4ccba5d09\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.699994 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bd1dc15-7e73-480b-9599-123a18602d5e-run-httpd\") pod \"ceilometer-0\" (UID: \"4bd1dc15-7e73-480b-9599-123a18602d5e\") " pod="openstack/ceilometer-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.707163 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bd1dc15-7e73-480b-9599-123a18602d5e-scripts\") pod \"ceilometer-0\" (UID: \"4bd1dc15-7e73-480b-9599-123a18602d5e\") " pod="openstack/ceilometer-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.707749 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2aa09706-8333-48d8-8bde-64b4ccba5d09-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2aa09706-8333-48d8-8bde-64b4ccba5d09\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.714051 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bd1dc15-7e73-480b-9599-123a18602d5e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"4bd1dc15-7e73-480b-9599-123a18602d5e\") " pod="openstack/ceilometer-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.715698 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2aa09706-8333-48d8-8bde-64b4ccba5d09-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2aa09706-8333-48d8-8bde-64b4ccba5d09\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.716557 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa09706-8333-48d8-8bde-64b4ccba5d09-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2aa09706-8333-48d8-8bde-64b4ccba5d09\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.723272 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4bd1dc15-7e73-480b-9599-123a18602d5e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4bd1dc15-7e73-480b-9599-123a18602d5e\") " pod="openstack/ceilometer-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.724267 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bd1dc15-7e73-480b-9599-123a18602d5e-config-data\") pod \"ceilometer-0\" (UID: \"4bd1dc15-7e73-480b-9599-123a18602d5e\") " pod="openstack/ceilometer-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.731900 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6b9jj\" (UniqueName: \"kubernetes.io/projected/2aa09706-8333-48d8-8bde-64b4ccba5d09-kube-api-access-6b9jj\") pod \"glance-default-internal-api-0\" (UID: \"2aa09706-8333-48d8-8bde-64b4ccba5d09\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.736076 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq7z4\" (UniqueName: \"kubernetes.io/projected/4bd1dc15-7e73-480b-9599-123a18602d5e-kube-api-access-mq7z4\") pod \"ceilometer-0\" (UID: \"4bd1dc15-7e73-480b-9599-123a18602d5e\") " pod="openstack/ceilometer-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.736686 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aa09706-8333-48d8-8bde-64b4ccba5d09-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2aa09706-8333-48d8-8bde-64b4ccba5d09\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.759695 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"2aa09706-8333-48d8-8bde-64b4ccba5d09\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.794916 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.814941 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.924704 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c12d-account-create-m5xhh" Oct 14 13:32:09 crc kubenswrapper[4725]: I1014 13:32:09.947382 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-577tj" Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.000673 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2618d426-3d62-4121-af03-d1ed1c23bf6e-config\") pod \"2618d426-3d62-4121-af03-d1ed1c23bf6e\" (UID: \"2618d426-3d62-4121-af03-d1ed1c23bf6e\") " Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.000732 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfdwr\" (UniqueName: \"kubernetes.io/projected/553743dd-4497-4092-8723-3c03580db239-kube-api-access-kfdwr\") pod \"553743dd-4497-4092-8723-3c03580db239\" (UID: \"553743dd-4497-4092-8723-3c03580db239\") " Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.000767 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2618d426-3d62-4121-af03-d1ed1c23bf6e-ovsdbserver-sb\") pod \"2618d426-3d62-4121-af03-d1ed1c23bf6e\" (UID: \"2618d426-3d62-4121-af03-d1ed1c23bf6e\") " Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.000906 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2618d426-3d62-4121-af03-d1ed1c23bf6e-dns-svc\") pod \"2618d426-3d62-4121-af03-d1ed1c23bf6e\" (UID: \"2618d426-3d62-4121-af03-d1ed1c23bf6e\") " Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.000976 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtldc\" (UniqueName: \"kubernetes.io/projected/2618d426-3d62-4121-af03-d1ed1c23bf6e-kube-api-access-rtldc\") pod \"2618d426-3d62-4121-af03-d1ed1c23bf6e\" (UID: \"2618d426-3d62-4121-af03-d1ed1c23bf6e\") " Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.001011 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2618d426-3d62-4121-af03-d1ed1c23bf6e-ovsdbserver-nb\") pod \"2618d426-3d62-4121-af03-d1ed1c23bf6e\" (UID: \"2618d426-3d62-4121-af03-d1ed1c23bf6e\") " Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.001042 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2618d426-3d62-4121-af03-d1ed1c23bf6e-dns-swift-storage-0\") pod \"2618d426-3d62-4121-af03-d1ed1c23bf6e\" (UID: \"2618d426-3d62-4121-af03-d1ed1c23bf6e\") " Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.008859 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2618d426-3d62-4121-af03-d1ed1c23bf6e-kube-api-access-rtldc" (OuterVolumeSpecName: "kube-api-access-rtldc") pod "2618d426-3d62-4121-af03-d1ed1c23bf6e" (UID: "2618d426-3d62-4121-af03-d1ed1c23bf6e"). InnerVolumeSpecName "kube-api-access-rtldc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.008925 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/553743dd-4497-4092-8723-3c03580db239-kube-api-access-kfdwr" (OuterVolumeSpecName: "kube-api-access-kfdwr") pod "553743dd-4497-4092-8723-3c03580db239" (UID: "553743dd-4497-4092-8723-3c03580db239"). InnerVolumeSpecName "kube-api-access-kfdwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.050732 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-577tj" Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.050860 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-577tj" event={"ID":"2618d426-3d62-4121-af03-d1ed1c23bf6e","Type":"ContainerDied","Data":"f07c799f165b332262c191a63418d61f1d068585f8f68a1d088786b7cec6b0b8"} Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.051221 4725 scope.go:117] "RemoveContainer" containerID="f07c799f165b332262c191a63418d61f1d068585f8f68a1d088786b7cec6b0b8" Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.051776 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2618d426-3d62-4121-af03-d1ed1c23bf6e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2618d426-3d62-4121-af03-d1ed1c23bf6e" (UID: "2618d426-3d62-4121-af03-d1ed1c23bf6e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.052306 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2618d426-3d62-4121-af03-d1ed1c23bf6e-config" (OuterVolumeSpecName: "config") pod "2618d426-3d62-4121-af03-d1ed1c23bf6e" (UID: "2618d426-3d62-4121-af03-d1ed1c23bf6e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.050526 4725 generic.go:334] "Generic (PLEG): container finished" podID="2618d426-3d62-4121-af03-d1ed1c23bf6e" containerID="f07c799f165b332262c191a63418d61f1d068585f8f68a1d088786b7cec6b0b8" exitCode=0 Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.053031 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-577tj" event={"ID":"2618d426-3d62-4121-af03-d1ed1c23bf6e","Type":"ContainerDied","Data":"3854f44335e99cdd7598a19778304fb1ad8d42c8339f16ea9d46f671501b07be"} Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.061073 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c12d-account-create-m5xhh" event={"ID":"553743dd-4497-4092-8723-3c03580db239","Type":"ContainerDied","Data":"152415dc925dce05ee304cf4c407c66e1b2d02493f4d29d0671bc623d608756b"} Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.061116 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="152415dc925dce05ee304cf4c407c66e1b2d02493f4d29d0671bc623d608756b" Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.061116 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c12d-account-create-m5xhh" Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.080276 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2618d426-3d62-4121-af03-d1ed1c23bf6e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2618d426-3d62-4121-af03-d1ed1c23bf6e" (UID: "2618d426-3d62-4121-af03-d1ed1c23bf6e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.080681 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2618d426-3d62-4121-af03-d1ed1c23bf6e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2618d426-3d62-4121-af03-d1ed1c23bf6e" (UID: "2618d426-3d62-4121-af03-d1ed1c23bf6e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.085407 4725 scope.go:117] "RemoveContainer" containerID="3252e9437e0e012b8f59c7dd93ed1b367aad422eb8cf0d2e6e8dcd109b1fea08" Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.088869 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-rrb4t"] Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.100008 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2618d426-3d62-4121-af03-d1ed1c23bf6e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2618d426-3d62-4121-af03-d1ed1c23bf6e" (UID: "2618d426-3d62-4121-af03-d1ed1c23bf6e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.103406 4725 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2618d426-3d62-4121-af03-d1ed1c23bf6e-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.103428 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtldc\" (UniqueName: \"kubernetes.io/projected/2618d426-3d62-4121-af03-d1ed1c23bf6e-kube-api-access-rtldc\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.103440 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2618d426-3d62-4121-af03-d1ed1c23bf6e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.103533 4725 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2618d426-3d62-4121-af03-d1ed1c23bf6e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.103562 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2618d426-3d62-4121-af03-d1ed1c23bf6e-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.103571 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfdwr\" (UniqueName: \"kubernetes.io/projected/553743dd-4497-4092-8723-3c03580db239-kube-api-access-kfdwr\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.103579 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/2618d426-3d62-4121-af03-d1ed1c23bf6e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.106649 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-684dcc5995-mbkf4"] Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.125117 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2sx29"] Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.125435 4725 scope.go:117] "RemoveContainer" containerID="f07c799f165b332262c191a63418d61f1d068585f8f68a1d088786b7cec6b0b8" Oct 14 13:32:10 crc kubenswrapper[4725]: E1014 13:32:10.126419 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f07c799f165b332262c191a63418d61f1d068585f8f68a1d088786b7cec6b0b8\": container with ID starting with f07c799f165b332262c191a63418d61f1d068585f8f68a1d088786b7cec6b0b8 not found: ID does not exist" containerID="f07c799f165b332262c191a63418d61f1d068585f8f68a1d088786b7cec6b0b8" Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.126470 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f07c799f165b332262c191a63418d61f1d068585f8f68a1d088786b7cec6b0b8"} err="failed to get container status \"f07c799f165b332262c191a63418d61f1d068585f8f68a1d088786b7cec6b0b8\": rpc error: code = NotFound desc = could not find container \"f07c799f165b332262c191a63418d61f1d068585f8f68a1d088786b7cec6b0b8\": container with ID starting with f07c799f165b332262c191a63418d61f1d068585f8f68a1d088786b7cec6b0b8 not found: ID does not exist" Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.126497 4725 scope.go:117] "RemoveContainer" containerID="3252e9437e0e012b8f59c7dd93ed1b367aad422eb8cf0d2e6e8dcd109b1fea08" Oct 14 13:32:10 crc kubenswrapper[4725]: W1014 13:32:10.135752 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6316dc0c_436a_4cb3_86ae_0d073a62980e.slice/crio-f353098c44c9ac2b7e2e8c0e4304c550fae7b297bc43ed599cf9a0b4d792aeb4 WatchSource:0}: Error finding container f353098c44c9ac2b7e2e8c0e4304c550fae7b297bc43ed599cf9a0b4d792aeb4: Status 404 returned error can't find the container with id f353098c44c9ac2b7e2e8c0e4304c550fae7b297bc43ed599cf9a0b4d792aeb4 Oct 14 13:32:10 crc kubenswrapper[4725]: E1014 13:32:10.135991 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3252e9437e0e012b8f59c7dd93ed1b367aad422eb8cf0d2e6e8dcd109b1fea08\": container with ID starting with 3252e9437e0e012b8f59c7dd93ed1b367aad422eb8cf0d2e6e8dcd109b1fea08 not found: ID does not exist" containerID="3252e9437e0e012b8f59c7dd93ed1b367aad422eb8cf0d2e6e8dcd109b1fea08" Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.136025 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3252e9437e0e012b8f59c7dd93ed1b367aad422eb8cf0d2e6e8dcd109b1fea08"} err="failed to get container status \"3252e9437e0e012b8f59c7dd93ed1b367aad422eb8cf0d2e6e8dcd109b1fea08\": rpc error: code = NotFound desc = could not find container \"3252e9437e0e012b8f59c7dd93ed1b367aad422eb8cf0d2e6e8dcd109b1fea08\": container with ID starting with 3252e9437e0e012b8f59c7dd93ed1b367aad422eb8cf0d2e6e8dcd109b1fea08 not found: ID does not exist" Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.136806 4725 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/horizon-6f49767877-s5fb5"] Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.354122 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-n8mrx"] Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.404590 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-fgqxw"] Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.464551 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 13:32:10 crc kubenswrapper[4725]: W1014 13:32:10.473160 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8fbbbffe_8e57_422a_bc9d_f61b07e7c3a5.slice/crio-4be75e1ba730bd30a84a41c7b23bd16014914c389c8c554106b4267a5e012d0b WatchSource:0}: Error finding container 4be75e1ba730bd30a84a41c7b23bd16014914c389c8c554106b4267a5e012d0b: Status 404 returned error can't find the container with id 4be75e1ba730bd30a84a41c7b23bd16014914c389c8c554106b4267a5e012d0b Oct 14 13:32:10 crc kubenswrapper[4725]: W1014 13:32:10.510278 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4bd1dc15_7e73_480b_9599_123a18602d5e.slice/crio-fe920be9a76a49ec86c3a822f2e9ae775b689221f73d2030071aebbecd4a8484 WatchSource:0}: Error finding container fe920be9a76a49ec86c3a822f2e9ae775b689221f73d2030071aebbecd4a8484: Status 404 returned error can't find the container with id fe920be9a76a49ec86c3a822f2e9ae775b689221f73d2030071aebbecd4a8484 Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.514870 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.561564 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.658287 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-577tj"] Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.664747 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-577tj"] Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.965808 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-gzbkx"] Oct 14 13:32:10 crc kubenswrapper[4725]: E1014 13:32:10.966212 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2618d426-3d62-4121-af03-d1ed1c23bf6e" containerName="dnsmasq-dns" Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.966236 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="2618d426-3d62-4121-af03-d1ed1c23bf6e" containerName="dnsmasq-dns" Oct 14 13:32:10 crc kubenswrapper[4725]: E1014 13:32:10.966269 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2618d426-3d62-4121-af03-d1ed1c23bf6e" containerName="init" Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.966277 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="2618d426-3d62-4121-af03-d1ed1c23bf6e" containerName="init" Oct 14 13:32:10 crc kubenswrapper[4725]: E1014 13:32:10.966294 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="553743dd-4497-4092-8723-3c03580db239" containerName="mariadb-account-create" Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.966301 4725 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="553743dd-4497-4092-8723-3c03580db239" containerName="mariadb-account-create" Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.966488 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="553743dd-4497-4092-8723-3c03580db239" containerName="mariadb-account-create" Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.966513 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="2618d426-3d62-4121-af03-d1ed1c23bf6e" containerName="dnsmasq-dns" Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.967125 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-gzbkx" Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.969886 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.970100 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-62bg8" Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.970279 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Oct 14 13:32:10 crc kubenswrapper[4725]: I1014 13:32:10.977688 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-gzbkx"] Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.020213 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ec415043-bd33-4ab3-8437-28eda0458656-db-sync-config-data\") pod \"cinder-db-sync-gzbkx\" (UID: \"ec415043-bd33-4ab3-8437-28eda0458656\") " pod="openstack/cinder-db-sync-gzbkx" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.020334 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec415043-bd33-4ab3-8437-28eda0458656-scripts\") pod \"cinder-db-sync-gzbkx\" (UID: \"ec415043-bd33-4ab3-8437-28eda0458656\") " pod="openstack/cinder-db-sync-gzbkx" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.020383 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhptj\" (UniqueName: \"kubernetes.io/projected/ec415043-bd33-4ab3-8437-28eda0458656-kube-api-access-lhptj\") pod \"cinder-db-sync-gzbkx\" (UID: \"ec415043-bd33-4ab3-8437-28eda0458656\") " pod="openstack/cinder-db-sync-gzbkx" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.020404 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ec415043-bd33-4ab3-8437-28eda0458656-etc-machine-id\") pod \"cinder-db-sync-gzbkx\" (UID: \"ec415043-bd33-4ab3-8437-28eda0458656\") " pod="openstack/cinder-db-sync-gzbkx" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.020425 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec415043-bd33-4ab3-8437-28eda0458656-config-data\") pod \"cinder-db-sync-gzbkx\" (UID: \"ec415043-bd33-4ab3-8437-28eda0458656\") " pod="openstack/cinder-db-sync-gzbkx" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.020441 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ec415043-bd33-4ab3-8437-28eda0458656-combined-ca-bundle\") pod \"cinder-db-sync-gzbkx\" (UID: \"ec415043-bd33-4ab3-8437-28eda0458656\") " pod="openstack/cinder-db-sync-gzbkx" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.073485 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2aa09706-8333-48d8-8bde-64b4ccba5d09","Type":"ContainerStarted","Data":"3d4f1ea1e8562b7f1b169c0cfa0f4cdb69e963425344250140fdb6f5bbcd98c4"} Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.075299 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5","Type":"ContainerStarted","Data":"4be75e1ba730bd30a84a41c7b23bd16014914c389c8c554106b4267a5e012d0b"} Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.077254 4725 generic.go:334] "Generic (PLEG): container finished" podID="e60f2f91-ff70-4ffb-86a1-653403235ef3" containerID="1fb24b6b867500e92320a06f9243a28c8f3a26173c361414e34c0f94c6976f65" exitCode=0 Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.077322 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-rrb4t" event={"ID":"e60f2f91-ff70-4ffb-86a1-653403235ef3","Type":"ContainerDied","Data":"1fb24b6b867500e92320a06f9243a28c8f3a26173c361414e34c0f94c6976f65"} Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.077349 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-rrb4t" event={"ID":"e60f2f91-ff70-4ffb-86a1-653403235ef3","Type":"ContainerStarted","Data":"61620bf739b1b195ea1b9b4787902e5cfa443a1b02d9c2fbfd0299d5e000a6d9"} Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.080110 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-fgqxw" event={"ID":"7d968db2-d49e-4eee-9927-11fd32b9cd89","Type":"ContainerStarted","Data":"68e80da7fb35c05623b669b14e218ce6bab9d7322d419cb015c7c7e3004db542"} Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.082187 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bd1dc15-7e73-480b-9599-123a18602d5e","Type":"ContainerStarted","Data":"fe920be9a76a49ec86c3a822f2e9ae775b689221f73d2030071aebbecd4a8484"} Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.083701 4725 generic.go:334] "Generic (PLEG): container finished" podID="59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3" containerID="6e8b06a8e0bc982ee9cac96a1a9a8b52f5e3529fb3d744de50abd9baded63ffa" exitCode=0 Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.083804 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-n8mrx" event={"ID":"59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3","Type":"ContainerDied","Data":"6e8b06a8e0bc982ee9cac96a1a9a8b52f5e3529fb3d744de50abd9baded63ffa"} Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.084563 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-n8mrx" event={"ID":"59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3","Type":"ContainerStarted","Data":"0b1777a95565bb6daf4e1b4dcf10bd1efc79e415420ef2b141d2b2b2f0e4bcc1"} Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.086135 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f49767877-s5fb5" event={"ID":"6316dc0c-436a-4cb3-86ae-0d073a62980e","Type":"ContainerStarted","Data":"f353098c44c9ac2b7e2e8c0e4304c550fae7b297bc43ed599cf9a0b4d792aeb4"} Oct 14 13:32:11 crc kubenswrapper[4725]: 
I1014 13:32:11.087054 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-684dcc5995-mbkf4" event={"ID":"486ea1cf-7023-407b-bb7f-c67ce34c42aa","Type":"ContainerStarted","Data":"299a5a505d7775e2108d651a5a669524cb64aa3a32315966f119294e786cc33a"} Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.088582 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2sx29" event={"ID":"16644f47-970c-4ed8-b44d-04a8f4765a63","Type":"ContainerStarted","Data":"a5d1b40ce1d3e6b000dc8866622c16730ff977dee3bfbf640f75c48553e2600b"} Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.088612 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2sx29" event={"ID":"16644f47-970c-4ed8-b44d-04a8f4765a63","Type":"ContainerStarted","Data":"07e0f5b82c539566355b10bc94d08d4a2a5a535cd372f08e6a5a24265b8ed692"} Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.129004 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec415043-bd33-4ab3-8437-28eda0458656-config-data\") pod \"cinder-db-sync-gzbkx\" (UID: \"ec415043-bd33-4ab3-8437-28eda0458656\") " pod="openstack/cinder-db-sync-gzbkx" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.136531 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec415043-bd33-4ab3-8437-28eda0458656-combined-ca-bundle\") pod \"cinder-db-sync-gzbkx\" (UID: \"ec415043-bd33-4ab3-8437-28eda0458656\") " pod="openstack/cinder-db-sync-gzbkx" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.136723 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ec415043-bd33-4ab3-8437-28eda0458656-db-sync-config-data\") pod \"cinder-db-sync-gzbkx\" (UID: \"ec415043-bd33-4ab3-8437-28eda0458656\") " pod="openstack/cinder-db-sync-gzbkx" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.137024 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec415043-bd33-4ab3-8437-28eda0458656-scripts\") pod \"cinder-db-sync-gzbkx\" (UID: \"ec415043-bd33-4ab3-8437-28eda0458656\") " pod="openstack/cinder-db-sync-gzbkx" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.137152 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhptj\" (UniqueName: \"kubernetes.io/projected/ec415043-bd33-4ab3-8437-28eda0458656-kube-api-access-lhptj\") pod \"cinder-db-sync-gzbkx\" (UID: \"ec415043-bd33-4ab3-8437-28eda0458656\") " pod="openstack/cinder-db-sync-gzbkx" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.137218 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ec415043-bd33-4ab3-8437-28eda0458656-etc-machine-id\") pod \"cinder-db-sync-gzbkx\" (UID: \"ec415043-bd33-4ab3-8437-28eda0458656\") " pod="openstack/cinder-db-sync-gzbkx" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.137311 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ec415043-bd33-4ab3-8437-28eda0458656-etc-machine-id\") pod \"cinder-db-sync-gzbkx\" (UID: \"ec415043-bd33-4ab3-8437-28eda0458656\") " pod="openstack/cinder-db-sync-gzbkx" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 
13:32:11.137896 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec415043-bd33-4ab3-8437-28eda0458656-combined-ca-bundle\") pod \"cinder-db-sync-gzbkx\" (UID: \"ec415043-bd33-4ab3-8437-28eda0458656\") " pod="openstack/cinder-db-sync-gzbkx" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.146217 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ec415043-bd33-4ab3-8437-28eda0458656-db-sync-config-data\") pod \"cinder-db-sync-gzbkx\" (UID: \"ec415043-bd33-4ab3-8437-28eda0458656\") " pod="openstack/cinder-db-sync-gzbkx" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.148422 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec415043-bd33-4ab3-8437-28eda0458656-config-data\") pod \"cinder-db-sync-gzbkx\" (UID: \"ec415043-bd33-4ab3-8437-28eda0458656\") " pod="openstack/cinder-db-sync-gzbkx" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.150394 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec415043-bd33-4ab3-8437-28eda0458656-scripts\") pod \"cinder-db-sync-gzbkx\" (UID: \"ec415043-bd33-4ab3-8437-28eda0458656\") " pod="openstack/cinder-db-sync-gzbkx" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.161699 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhptj\" (UniqueName: \"kubernetes.io/projected/ec415043-bd33-4ab3-8437-28eda0458656-kube-api-access-lhptj\") pod \"cinder-db-sync-gzbkx\" (UID: \"ec415043-bd33-4ab3-8437-28eda0458656\") " pod="openstack/cinder-db-sync-gzbkx" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.181174 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-2sx29" podStartSLOduration=3.181156471 podStartE2EDuration="3.181156471s" podCreationTimestamp="2025-10-14 13:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:32:11.14581513 +0000 UTC m=+1047.994249939" watchObservedRunningTime="2025-10-14 13:32:11.181156471 +0000 UTC m=+1048.029591280" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.209621 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-rwcgw"] Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.210893 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-rwcgw" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.213847 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-d7s2t" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.214089 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.220187 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-rwcgw"] Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.299181 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-gzbkx" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.345839 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f19eb4e-4359-4092-a050-e1d695fbb891-db-sync-config-data\") pod \"barbican-db-sync-rwcgw\" (UID: \"1f19eb4e-4359-4092-a050-e1d695fbb891\") " pod="openstack/barbican-db-sync-rwcgw" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.346052 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f19eb4e-4359-4092-a050-e1d695fbb891-combined-ca-bundle\") pod \"barbican-db-sync-rwcgw\" (UID: \"1f19eb4e-4359-4092-a050-e1d695fbb891\") " pod="openstack/barbican-db-sync-rwcgw" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.346218 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbkkb\" (UniqueName: \"kubernetes.io/projected/1f19eb4e-4359-4092-a050-e1d695fbb891-kube-api-access-xbkkb\") pod \"barbican-db-sync-rwcgw\" (UID: \"1f19eb4e-4359-4092-a050-e1d695fbb891\") " pod="openstack/barbican-db-sync-rwcgw" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.359678 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-5sg8k"] Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.361073 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-5sg8k" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.365815 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-6mvtz" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.366053 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.366160 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.374558 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-5sg8k"] Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.447802 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0-combined-ca-bundle\") pod \"neutron-db-sync-5sg8k\" (UID: \"9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0\") " pod="openstack/neutron-db-sync-5sg8k" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.447842 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0-config\") pod \"neutron-db-sync-5sg8k\" (UID: \"9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0\") " pod="openstack/neutron-db-sync-5sg8k" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.447932 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbkkb\" (UniqueName: \"kubernetes.io/projected/1f19eb4e-4359-4092-a050-e1d695fbb891-kube-api-access-xbkkb\") pod \"barbican-db-sync-rwcgw\" (UID: \"1f19eb4e-4359-4092-a050-e1d695fbb891\") " pod="openstack/barbican-db-sync-rwcgw" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.448100 4725 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f19eb4e-4359-4092-a050-e1d695fbb891-db-sync-config-data\") pod \"barbican-db-sync-rwcgw\" (UID: \"1f19eb4e-4359-4092-a050-e1d695fbb891\") " pod="openstack/barbican-db-sync-rwcgw" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.448218 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmhg6\" (UniqueName: \"kubernetes.io/projected/9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0-kube-api-access-hmhg6\") pod \"neutron-db-sync-5sg8k\" (UID: \"9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0\") " pod="openstack/neutron-db-sync-5sg8k" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.448318 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f19eb4e-4359-4092-a050-e1d695fbb891-combined-ca-bundle\") pod \"barbican-db-sync-rwcgw\" (UID: \"1f19eb4e-4359-4092-a050-e1d695fbb891\") " pod="openstack/barbican-db-sync-rwcgw" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.458058 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f19eb4e-4359-4092-a050-e1d695fbb891-db-sync-config-data\") pod \"barbican-db-sync-rwcgw\" (UID: \"1f19eb4e-4359-4092-a050-e1d695fbb891\") " pod="openstack/barbican-db-sync-rwcgw" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.463131 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f19eb4e-4359-4092-a050-e1d695fbb891-combined-ca-bundle\") pod \"barbican-db-sync-rwcgw\" (UID: \"1f19eb4e-4359-4092-a050-e1d695fbb891\") " pod="openstack/barbican-db-sync-rwcgw" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.472965 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbkkb\" (UniqueName: \"kubernetes.io/projected/1f19eb4e-4359-4092-a050-e1d695fbb891-kube-api-access-xbkkb\") pod \"barbican-db-sync-rwcgw\" (UID: \"1f19eb4e-4359-4092-a050-e1d695fbb891\") " pod="openstack/barbican-db-sync-rwcgw" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.538701 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-rwcgw" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.552787 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmhg6\" (UniqueName: \"kubernetes.io/projected/9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0-kube-api-access-hmhg6\") pod \"neutron-db-sync-5sg8k\" (UID: \"9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0\") " pod="openstack/neutron-db-sync-5sg8k" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.552946 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0-combined-ca-bundle\") pod \"neutron-db-sync-5sg8k\" (UID: \"9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0\") " pod="openstack/neutron-db-sync-5sg8k" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.552969 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0-config\") pod \"neutron-db-sync-5sg8k\" (UID: \"9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0\") " pod="openstack/neutron-db-sync-5sg8k" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.557685 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0-config\") pod \"neutron-db-sync-5sg8k\" (UID: \"9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0\") " pod="openstack/neutron-db-sync-5sg8k" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.568904 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0-combined-ca-bundle\") pod \"neutron-db-sync-5sg8k\" (UID: \"9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0\") " pod="openstack/neutron-db-sync-5sg8k" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.575058 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-rrb4t" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.596264 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmhg6\" (UniqueName: \"kubernetes.io/projected/9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0-kube-api-access-hmhg6\") pod \"neutron-db-sync-5sg8k\" (UID: \"9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0\") " pod="openstack/neutron-db-sync-5sg8k" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.654478 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e60f2f91-ff70-4ffb-86a1-653403235ef3-ovsdbserver-sb\") pod \"e60f2f91-ff70-4ffb-86a1-653403235ef3\" (UID: \"e60f2f91-ff70-4ffb-86a1-653403235ef3\") " Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.654546 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e60f2f91-ff70-4ffb-86a1-653403235ef3-dns-svc\") pod \"e60f2f91-ff70-4ffb-86a1-653403235ef3\" (UID: \"e60f2f91-ff70-4ffb-86a1-653403235ef3\") " Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.655467 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e60f2f91-ff70-4ffb-86a1-653403235ef3-ovsdbserver-nb\") pod \"e60f2f91-ff70-4ffb-86a1-653403235ef3\" (UID: \"e60f2f91-ff70-4ffb-86a1-653403235ef3\") " Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.655556 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8tk\" (UniqueName: \"kubernetes.io/projected/e60f2f91-ff70-4ffb-86a1-653403235ef3-kube-api-access-ws8tk\") pod \"e60f2f91-ff70-4ffb-86a1-653403235ef3\" (UID: \"e60f2f91-ff70-4ffb-86a1-653403235ef3\") " Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.655607 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e60f2f91-ff70-4ffb-86a1-653403235ef3-dns-swift-storage-0\") pod \"e60f2f91-ff70-4ffb-86a1-653403235ef3\" (UID: \"e60f2f91-ff70-4ffb-86a1-653403235ef3\") " Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.655624 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e60f2f91-ff70-4ffb-86a1-653403235ef3-config\") pod \"e60f2f91-ff70-4ffb-86a1-653403235ef3\" (UID: \"e60f2f91-ff70-4ffb-86a1-653403235ef3\") " Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.660649 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e60f2f91-ff70-4ffb-86a1-653403235ef3-kube-api-access-ws8tk" (OuterVolumeSpecName: "kube-api-access-ws8tk") pod "e60f2f91-ff70-4ffb-86a1-653403235ef3" (UID: "e60f2f91-ff70-4ffb-86a1-653403235ef3"). InnerVolumeSpecName "kube-api-access-ws8tk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.678343 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e60f2f91-ff70-4ffb-86a1-653403235ef3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e60f2f91-ff70-4ffb-86a1-653403235ef3" (UID: "e60f2f91-ff70-4ffb-86a1-653403235ef3"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.684926 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e60f2f91-ff70-4ffb-86a1-653403235ef3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e60f2f91-ff70-4ffb-86a1-653403235ef3" (UID: "e60f2f91-ff70-4ffb-86a1-653403235ef3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.691771 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e60f2f91-ff70-4ffb-86a1-653403235ef3-config" (OuterVolumeSpecName: "config") pod "e60f2f91-ff70-4ffb-86a1-653403235ef3" (UID: "e60f2f91-ff70-4ffb-86a1-653403235ef3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.703669 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e60f2f91-ff70-4ffb-86a1-653403235ef3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e60f2f91-ff70-4ffb-86a1-653403235ef3" (UID: "e60f2f91-ff70-4ffb-86a1-653403235ef3"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.721995 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e60f2f91-ff70-4ffb-86a1-653403235ef3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e60f2f91-ff70-4ffb-86a1-653403235ef3" (UID: "e60f2f91-ff70-4ffb-86a1-653403235ef3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.759057 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e60f2f91-ff70-4ffb-86a1-653403235ef3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.759090 4725 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e60f2f91-ff70-4ffb-86a1-653403235ef3-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.759100 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e60f2f91-ff70-4ffb-86a1-653403235ef3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.759108 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ws8tk\" (UniqueName: \"kubernetes.io/projected/e60f2f91-ff70-4ffb-86a1-653403235ef3-kube-api-access-ws8tk\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.759119 4725 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e60f2f91-ff70-4ffb-86a1-653403235ef3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.759129 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e60f2f91-ff70-4ffb-86a1-653403235ef3-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.836919 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-rwcgw"] 
Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.871480 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-5sg8k" Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.889761 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-gzbkx"] Oct 14 13:32:11 crc kubenswrapper[4725]: W1014 13:32:11.891688 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f19eb4e_4359_4092_a050_e1d695fbb891.slice/crio-7d5530e131f29f44828af0626d0373e717bf75c34fb9b2763a11d8faef9ac46d WatchSource:0}: Error finding container 7d5530e131f29f44828af0626d0373e717bf75c34fb9b2763a11d8faef9ac46d: Status 404 returned error can't find the container with id 7d5530e131f29f44828af0626d0373e717bf75c34fb9b2763a11d8faef9ac46d Oct 14 13:32:11 crc kubenswrapper[4725]: I1014 13:32:11.934126 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2618d426-3d62-4121-af03-d1ed1c23bf6e" path="/var/lib/kubelet/pods/2618d426-3d62-4121-af03-d1ed1c23bf6e/volumes" Oct 14 13:32:11 crc kubenswrapper[4725]: W1014 13:32:11.942755 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec415043_bd33_4ab3_8437_28eda0458656.slice/crio-46ced52c5ea988369e91648d6aa9254aca236f85f7464b50f7141e7c5eab22ed WatchSource:0}: Error finding container 46ced52c5ea988369e91648d6aa9254aca236f85f7464b50f7141e7c5eab22ed: Status 404 returned error can't find the container with id 46ced52c5ea988369e91648d6aa9254aca236f85f7464b50f7141e7c5eab22ed Oct 14 13:32:12 crc kubenswrapper[4725]: I1014 13:32:12.120193 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-rrb4t" Oct 14 13:32:12 crc kubenswrapper[4725]: I1014 13:32:12.120219 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-rrb4t" event={"ID":"e60f2f91-ff70-4ffb-86a1-653403235ef3","Type":"ContainerDied","Data":"61620bf739b1b195ea1b9b4787902e5cfa443a1b02d9c2fbfd0299d5e000a6d9"} Oct 14 13:32:12 crc kubenswrapper[4725]: I1014 13:32:12.120812 4725 scope.go:117] "RemoveContainer" containerID="1fb24b6b867500e92320a06f9243a28c8f3a26173c361414e34c0f94c6976f65" Oct 14 13:32:12 crc kubenswrapper[4725]: I1014 13:32:12.126013 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-n8mrx" event={"ID":"59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3","Type":"ContainerStarted","Data":"af525c32cc8dc653c99574eba7c39349958847bcac7fd9f72c617fe843a10b49"} Oct 14 13:32:12 crc kubenswrapper[4725]: I1014 13:32:12.126299 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b5c85b87-n8mrx" Oct 14 13:32:12 crc kubenswrapper[4725]: I1014 13:32:12.142629 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2aa09706-8333-48d8-8bde-64b4ccba5d09","Type":"ContainerStarted","Data":"bca41fd1286809743b4fc90069c0a9116affc9e0e572ab2601a211bbc0ae4162"} Oct 14 13:32:12 crc kubenswrapper[4725]: I1014 13:32:12.146865 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5","Type":"ContainerStarted","Data":"87d1af5334ddcb12b15a351243fe780f7d76938c4d41bf5ec075198137890a83"} Oct 14 13:32:12 crc kubenswrapper[4725]: I1014 13:32:12.156246 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-gzbkx" event={"ID":"ec415043-bd33-4ab3-8437-28eda0458656","Type":"ContainerStarted","Data":"46ced52c5ea988369e91648d6aa9254aca236f85f7464b50f7141e7c5eab22ed"} Oct 14 13:32:12 crc kubenswrapper[4725]: I1014 13:32:12.166320 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b5c85b87-n8mrx" podStartSLOduration=4.166301616 podStartE2EDuration="4.166301616s" podCreationTimestamp="2025-10-14 13:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:32:12.154002551 +0000 UTC m=+1049.002437360" watchObservedRunningTime="2025-10-14 13:32:12.166301616 +0000 UTC m=+1049.014736425" Oct 14 13:32:12 crc kubenswrapper[4725]: I1014 13:32:12.172272 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rwcgw" event={"ID":"1f19eb4e-4359-4092-a050-e1d695fbb891","Type":"ContainerStarted","Data":"7d5530e131f29f44828af0626d0373e717bf75c34fb9b2763a11d8faef9ac46d"} Oct 14 13:32:12 crc kubenswrapper[4725]: I1014 13:32:12.228582 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-rrb4t"] Oct 14 13:32:12 crc kubenswrapper[4725]: I1014 13:32:12.234237 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-rrb4t"] Oct 14 13:32:12 crc kubenswrapper[4725]: I1014 13:32:12.417004 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 13:32:12 crc kubenswrapper[4725]: I1014 13:32:12.424188 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-5sg8k"] Oct 14 13:32:12 crc 
Oct 14 13:32:12 crc kubenswrapper[4725]: I1014 13:32:12.514431 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-684dcc5995-mbkf4"] Oct 14 13:32:12 crc kubenswrapper[4725]: I1014 13:32:12.535202 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 14 13:32:12 crc kubenswrapper[4725]: I1014 13:32:12.560545 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-65fbf987c-h98xp"] Oct 14 13:32:12 crc kubenswrapper[4725]: E1014 13:32:12.561022 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e60f2f91-ff70-4ffb-86a1-653403235ef3" containerName="init" Oct 14 13:32:12 crc kubenswrapper[4725]: I1014 13:32:12.561043 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="e60f2f91-ff70-4ffb-86a1-653403235ef3" containerName="init" Oct 14 13:32:12 crc kubenswrapper[4725]: I1014 13:32:12.561253 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="e60f2f91-ff70-4ffb-86a1-653403235ef3" containerName="init" Oct 14 13:32:12 crc kubenswrapper[4725]: I1014 13:32:12.562387 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-65fbf987c-h98xp" Oct 14 13:32:12 crc kubenswrapper[4725]: I1014 13:32:12.576051 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-65fbf987c-h98xp"] Oct 14 13:32:12 crc kubenswrapper[4725]: I1014 13:32:12.625759 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 13:32:12 crc kubenswrapper[4725]: I1014 13:32:12.690412 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wf6h\" (UniqueName: \"kubernetes.io/projected/f775e4a2-1844-4b66-ada3-15ce61e06ed7-kube-api-access-8wf6h\") pod \"horizon-65fbf987c-h98xp\" (UID: \"f775e4a2-1844-4b66-ada3-15ce61e06ed7\") " pod="openstack/horizon-65fbf987c-h98xp" Oct 14 13:32:12 crc kubenswrapper[4725]: I1014 13:32:12.690860 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f775e4a2-1844-4b66-ada3-15ce61e06ed7-config-data\") pod \"horizon-65fbf987c-h98xp\" (UID: \"f775e4a2-1844-4b66-ada3-15ce61e06ed7\") " pod="openstack/horizon-65fbf987c-h98xp" Oct 14 13:32:12 crc kubenswrapper[4725]: I1014 13:32:12.691058 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f775e4a2-1844-4b66-ada3-15ce61e06ed7-logs\") pod \"horizon-65fbf987c-h98xp\" (UID: \"f775e4a2-1844-4b66-ada3-15ce61e06ed7\") " pod="openstack/horizon-65fbf987c-h98xp" Oct 14 13:32:12 crc kubenswrapper[4725]: I1014 13:32:12.691167 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f775e4a2-1844-4b66-ada3-15ce61e06ed7-scripts\") pod \"horizon-65fbf987c-h98xp\" (UID: \"f775e4a2-1844-4b66-ada3-15ce61e06ed7\") " pod="openstack/horizon-65fbf987c-h98xp" Oct 14 13:32:12 crc kubenswrapper[4725]: I1014 13:32:12.691257 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f775e4a2-1844-4b66-ada3-15ce61e06ed7-horizon-secret-key\") pod \"horizon-65fbf987c-h98xp\" (UID: \"f775e4a2-1844-4b66-ada3-15ce61e06ed7\") " pod="openstack/horizon-65fbf987c-h98xp" Oct 14 13:32:12 crc kubenswrapper[4725]: I1014 13:32:12.792870 4725 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f775e4a2-1844-4b66-ada3-15ce61e06ed7-horizon-secret-key\") pod \"horizon-65fbf987c-h98xp\" (UID: \"f775e4a2-1844-4b66-ada3-15ce61e06ed7\") " pod="openstack/horizon-65fbf987c-h98xp" Oct 14 13:32:12 crc kubenswrapper[4725]: I1014 13:32:12.792973 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wf6h\" (UniqueName: \"kubernetes.io/projected/f775e4a2-1844-4b66-ada3-15ce61e06ed7-kube-api-access-8wf6h\") pod \"horizon-65fbf987c-h98xp\" (UID: \"f775e4a2-1844-4b66-ada3-15ce61e06ed7\") " pod="openstack/horizon-65fbf987c-h98xp" Oct 14 13:32:12 crc kubenswrapper[4725]: I1014 13:32:12.793027 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f775e4a2-1844-4b66-ada3-15ce61e06ed7-config-data\") pod \"horizon-65fbf987c-h98xp\" (UID: \"f775e4a2-1844-4b66-ada3-15ce61e06ed7\") " pod="openstack/horizon-65fbf987c-h98xp" Oct 14 13:32:12 crc kubenswrapper[4725]: I1014 13:32:12.793113 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f775e4a2-1844-4b66-ada3-15ce61e06ed7-logs\") pod \"horizon-65fbf987c-h98xp\" (UID: \"f775e4a2-1844-4b66-ada3-15ce61e06ed7\") " pod="openstack/horizon-65fbf987c-h98xp" Oct 14 13:32:12 crc kubenswrapper[4725]: I1014 13:32:12.793156 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f775e4a2-1844-4b66-ada3-15ce61e06ed7-scripts\") pod \"horizon-65fbf987c-h98xp\" (UID: \"f775e4a2-1844-4b66-ada3-15ce61e06ed7\") " pod="openstack/horizon-65fbf987c-h98xp" Oct 14 13:32:12 crc kubenswrapper[4725]: I1014 13:32:12.794170 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f775e4a2-1844-4b66-ada3-15ce61e06ed7-logs\") pod \"horizon-65fbf987c-h98xp\" (UID: \"f775e4a2-1844-4b66-ada3-15ce61e06ed7\") " pod="openstack/horizon-65fbf987c-h98xp" Oct 14 13:32:12 crc kubenswrapper[4725]: I1014 13:32:12.794412 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f775e4a2-1844-4b66-ada3-15ce61e06ed7-scripts\") pod \"horizon-65fbf987c-h98xp\" (UID: \"f775e4a2-1844-4b66-ada3-15ce61e06ed7\") " pod="openstack/horizon-65fbf987c-h98xp" Oct 14 13:32:12 crc kubenswrapper[4725]: I1014 13:32:12.795285 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f775e4a2-1844-4b66-ada3-15ce61e06ed7-config-data\") pod \"horizon-65fbf987c-h98xp\" (UID: \"f775e4a2-1844-4b66-ada3-15ce61e06ed7\") " pod="openstack/horizon-65fbf987c-h98xp" Oct 14 13:32:12 crc kubenswrapper[4725]: I1014 13:32:12.808249 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f775e4a2-1844-4b66-ada3-15ce61e06ed7-horizon-secret-key\") pod \"horizon-65fbf987c-h98xp\" (UID: \"f775e4a2-1844-4b66-ada3-15ce61e06ed7\") " pod="openstack/horizon-65fbf987c-h98xp" Oct 14 13:32:12 crc kubenswrapper[4725]: I1014 13:32:12.815308 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wf6h\" (UniqueName: \"kubernetes.io/projected/f775e4a2-1844-4b66-ada3-15ce61e06ed7-kube-api-access-8wf6h\") pod \"horizon-65fbf987c-h98xp\" 
(UID: \"f775e4a2-1844-4b66-ada3-15ce61e06ed7\") " pod="openstack/horizon-65fbf987c-h98xp" Oct 14 13:32:12 crc kubenswrapper[4725]: I1014 13:32:12.939168 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-65fbf987c-h98xp" Oct 14 13:32:13 crc kubenswrapper[4725]: I1014 13:32:13.192971 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-5sg8k" event={"ID":"9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0","Type":"ContainerStarted","Data":"9b0a728ed29905664ea6e13e6214b56b36b8af5b20176fbc4a20a1123e1ddfdf"} Oct 14 13:32:13 crc kubenswrapper[4725]: I1014 13:32:13.193233 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-5sg8k" event={"ID":"9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0","Type":"ContainerStarted","Data":"96bda005a59ddbfd6f2997b5616778b15e790a7515400e6fee9f6d25de5716fe"} Oct 14 13:32:13 crc kubenswrapper[4725]: I1014 13:32:13.198213 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2aa09706-8333-48d8-8bde-64b4ccba5d09","Type":"ContainerStarted","Data":"898fa800cc1850497048f937d1785302a58371cf54a67084fd0e17d1c9139202"} Oct 14 13:32:13 crc kubenswrapper[4725]: I1014 13:32:13.198390 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="2aa09706-8333-48d8-8bde-64b4ccba5d09" containerName="glance-log" containerID="cri-o://bca41fd1286809743b4fc90069c0a9116affc9e0e572ab2601a211bbc0ae4162" gracePeriod=30 Oct 14 13:32:13 crc kubenswrapper[4725]: I1014 13:32:13.198606 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="2aa09706-8333-48d8-8bde-64b4ccba5d09" containerName="glance-httpd" containerID="cri-o://898fa800cc1850497048f937d1785302a58371cf54a67084fd0e17d1c9139202" gracePeriod=30 Oct 14 13:32:13 crc kubenswrapper[4725]: I1014 13:32:13.211650 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-5sg8k" podStartSLOduration=2.211624507 podStartE2EDuration="2.211624507s" podCreationTimestamp="2025-10-14 13:32:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:32:13.208500672 +0000 UTC m=+1050.056935471" watchObservedRunningTime="2025-10-14 13:32:13.211624507 +0000 UTC m=+1050.060059316" Oct 14 13:32:13 crc kubenswrapper[4725]: I1014 13:32:13.214120 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5","Type":"ContainerStarted","Data":"c54365e21dec2b85fc1dd23f52e73d5be1265474d48de1cb876b46d06d305f4e"} Oct 14 13:32:13 crc kubenswrapper[4725]: I1014 13:32:13.234919 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.23490105 podStartE2EDuration="5.23490105s" podCreationTimestamp="2025-10-14 13:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:32:13.226956023 +0000 UTC m=+1050.075390842" watchObservedRunningTime="2025-10-14 13:32:13.23490105 +0000 UTC m=+1050.083335859" Oct 14 13:32:13 crc kubenswrapper[4725]: I1014 13:32:13.269555 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" 
podStartSLOduration=5.269535741 podStartE2EDuration="5.269535741s" podCreationTimestamp="2025-10-14 13:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:32:13.260312971 +0000 UTC m=+1050.108747780" watchObservedRunningTime="2025-10-14 13:32:13.269535741 +0000 UTC m=+1050.117970550" Oct 14 13:32:13 crc kubenswrapper[4725]: W1014 13:32:13.524756 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf775e4a2_1844_4b66_ada3_15ce61e06ed7.slice/crio-4ebef32560a377b172824a18f83cda034f0431e3eaf5803d6fbaa72d02ffbbb2 WatchSource:0}: Error finding container 4ebef32560a377b172824a18f83cda034f0431e3eaf5803d6fbaa72d02ffbbb2: Status 404 returned error can't find the container with id 4ebef32560a377b172824a18f83cda034f0431e3eaf5803d6fbaa72d02ffbbb2 Oct 14 13:32:13 crc kubenswrapper[4725]: I1014 13:32:13.525740 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-65fbf987c-h98xp"] Oct 14 13:32:13 crc kubenswrapper[4725]: I1014 13:32:13.947427 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e60f2f91-ff70-4ffb-86a1-653403235ef3" path="/var/lib/kubelet/pods/e60f2f91-ff70-4ffb-86a1-653403235ef3/volumes" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.115416 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.231539 4725 generic.go:334] "Generic (PLEG): container finished" podID="2aa09706-8333-48d8-8bde-64b4ccba5d09" containerID="898fa800cc1850497048f937d1785302a58371cf54a67084fd0e17d1c9139202" exitCode=0 Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.231577 4725 generic.go:334] "Generic (PLEG): container finished" podID="2aa09706-8333-48d8-8bde-64b4ccba5d09" containerID="bca41fd1286809743b4fc90069c0a9116affc9e0e572ab2601a211bbc0ae4162" exitCode=143 Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.231661 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2aa09706-8333-48d8-8bde-64b4ccba5d09","Type":"ContainerDied","Data":"898fa800cc1850497048f937d1785302a58371cf54a67084fd0e17d1c9139202"} Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.231692 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2aa09706-8333-48d8-8bde-64b4ccba5d09","Type":"ContainerDied","Data":"bca41fd1286809743b4fc90069c0a9116affc9e0e572ab2601a211bbc0ae4162"} Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.231707 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.231720 4725 scope.go:117] "RemoveContainer" containerID="898fa800cc1850497048f937d1785302a58371cf54a67084fd0e17d1c9139202" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.231705 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2aa09706-8333-48d8-8bde-64b4ccba5d09","Type":"ContainerDied","Data":"3d4f1ea1e8562b7f1b169c0cfa0f4cdb69e963425344250140fdb6f5bbcd98c4"} Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.233532 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65fbf987c-h98xp" event={"ID":"f775e4a2-1844-4b66-ada3-15ce61e06ed7","Type":"ContainerStarted","Data":"4ebef32560a377b172824a18f83cda034f0431e3eaf5803d6fbaa72d02ffbbb2"} Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.233680 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5" containerName="glance-log" containerID="cri-o://87d1af5334ddcb12b15a351243fe780f7d76938c4d41bf5ec075198137890a83" gracePeriod=30 Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.233735 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5" containerName="glance-httpd" containerID="cri-o://c54365e21dec2b85fc1dd23f52e73d5be1265474d48de1cb876b46d06d305f4e" gracePeriod=30 Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.254007 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2aa09706-8333-48d8-8bde-64b4ccba5d09-scripts\") pod \"2aa09706-8333-48d8-8bde-64b4ccba5d09\" (UID: \"2aa09706-8333-48d8-8bde-64b4ccba5d09\") " Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.254068 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"2aa09706-8333-48d8-8bde-64b4ccba5d09\" (UID: \"2aa09706-8333-48d8-8bde-64b4ccba5d09\") " Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.254138 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6b9jj\" (UniqueName: \"kubernetes.io/projected/2aa09706-8333-48d8-8bde-64b4ccba5d09-kube-api-access-6b9jj\") pod \"2aa09706-8333-48d8-8bde-64b4ccba5d09\" (UID: \"2aa09706-8333-48d8-8bde-64b4ccba5d09\") " Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.254162 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aa09706-8333-48d8-8bde-64b4ccba5d09-config-data\") pod \"2aa09706-8333-48d8-8bde-64b4ccba5d09\" (UID: \"2aa09706-8333-48d8-8bde-64b4ccba5d09\") " Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.254213 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2aa09706-8333-48d8-8bde-64b4ccba5d09-httpd-run\") pod \"2aa09706-8333-48d8-8bde-64b4ccba5d09\" (UID: \"2aa09706-8333-48d8-8bde-64b4ccba5d09\") " Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.254236 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2aa09706-8333-48d8-8bde-64b4ccba5d09-combined-ca-bundle\") pod \"2aa09706-8333-48d8-8bde-64b4ccba5d09\" (UID: \"2aa09706-8333-48d8-8bde-64b4ccba5d09\") " Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.254306 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2aa09706-8333-48d8-8bde-64b4ccba5d09-logs\") pod \"2aa09706-8333-48d8-8bde-64b4ccba5d09\" (UID: \"2aa09706-8333-48d8-8bde-64b4ccba5d09\") " Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.255066 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2aa09706-8333-48d8-8bde-64b4ccba5d09-logs" (OuterVolumeSpecName: "logs") pod "2aa09706-8333-48d8-8bde-64b4ccba5d09" (UID: "2aa09706-8333-48d8-8bde-64b4ccba5d09"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.260130 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2aa09706-8333-48d8-8bde-64b4ccba5d09-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2aa09706-8333-48d8-8bde-64b4ccba5d09" (UID: "2aa09706-8333-48d8-8bde-64b4ccba5d09"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.260370 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "2aa09706-8333-48d8-8bde-64b4ccba5d09" (UID: "2aa09706-8333-48d8-8bde-64b4ccba5d09"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.271373 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aa09706-8333-48d8-8bde-64b4ccba5d09-scripts" (OuterVolumeSpecName: "scripts") pod "2aa09706-8333-48d8-8bde-64b4ccba5d09" (UID: "2aa09706-8333-48d8-8bde-64b4ccba5d09"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.273647 4725 scope.go:117] "RemoveContainer" containerID="bca41fd1286809743b4fc90069c0a9116affc9e0e572ab2601a211bbc0ae4162" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.284119 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2aa09706-8333-48d8-8bde-64b4ccba5d09-kube-api-access-6b9jj" (OuterVolumeSpecName: "kube-api-access-6b9jj") pod "2aa09706-8333-48d8-8bde-64b4ccba5d09" (UID: "2aa09706-8333-48d8-8bde-64b4ccba5d09"). InnerVolumeSpecName "kube-api-access-6b9jj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.290826 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aa09706-8333-48d8-8bde-64b4ccba5d09-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2aa09706-8333-48d8-8bde-64b4ccba5d09" (UID: "2aa09706-8333-48d8-8bde-64b4ccba5d09"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.313580 4725 scope.go:117] "RemoveContainer" containerID="898fa800cc1850497048f937d1785302a58371cf54a67084fd0e17d1c9139202" Oct 14 13:32:14 crc kubenswrapper[4725]: E1014 13:32:14.314271 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"898fa800cc1850497048f937d1785302a58371cf54a67084fd0e17d1c9139202\": container with ID starting with 898fa800cc1850497048f937d1785302a58371cf54a67084fd0e17d1c9139202 not found: ID does not exist" containerID="898fa800cc1850497048f937d1785302a58371cf54a67084fd0e17d1c9139202" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.314305 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"898fa800cc1850497048f937d1785302a58371cf54a67084fd0e17d1c9139202"} err="failed to get container status \"898fa800cc1850497048f937d1785302a58371cf54a67084fd0e17d1c9139202\": rpc error: code = NotFound desc = could not find container \"898fa800cc1850497048f937d1785302a58371cf54a67084fd0e17d1c9139202\": container with ID starting with 898fa800cc1850497048f937d1785302a58371cf54a67084fd0e17d1c9139202 not found: ID does not exist" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.314325 4725 scope.go:117] "RemoveContainer" containerID="bca41fd1286809743b4fc90069c0a9116affc9e0e572ab2601a211bbc0ae4162" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.315894 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aa09706-8333-48d8-8bde-64b4ccba5d09-config-data" (OuterVolumeSpecName: "config-data") pod "2aa09706-8333-48d8-8bde-64b4ccba5d09" (UID: "2aa09706-8333-48d8-8bde-64b4ccba5d09"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:32:14 crc kubenswrapper[4725]: E1014 13:32:14.320107 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bca41fd1286809743b4fc90069c0a9116affc9e0e572ab2601a211bbc0ae4162\": container with ID starting with bca41fd1286809743b4fc90069c0a9116affc9e0e572ab2601a211bbc0ae4162 not found: ID does not exist" containerID="bca41fd1286809743b4fc90069c0a9116affc9e0e572ab2601a211bbc0ae4162" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.320163 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bca41fd1286809743b4fc90069c0a9116affc9e0e572ab2601a211bbc0ae4162"} err="failed to get container status \"bca41fd1286809743b4fc90069c0a9116affc9e0e572ab2601a211bbc0ae4162\": rpc error: code = NotFound desc = could not find container \"bca41fd1286809743b4fc90069c0a9116affc9e0e572ab2601a211bbc0ae4162\": container with ID starting with bca41fd1286809743b4fc90069c0a9116affc9e0e572ab2601a211bbc0ae4162 not found: ID does not exist" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.320194 4725 scope.go:117] "RemoveContainer" containerID="898fa800cc1850497048f937d1785302a58371cf54a67084fd0e17d1c9139202" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.320648 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"898fa800cc1850497048f937d1785302a58371cf54a67084fd0e17d1c9139202"} err="failed to get container status \"898fa800cc1850497048f937d1785302a58371cf54a67084fd0e17d1c9139202\": rpc error: code = NotFound desc = could not find container \"898fa800cc1850497048f937d1785302a58371cf54a67084fd0e17d1c9139202\": container with ID starting with 898fa800cc1850497048f937d1785302a58371cf54a67084fd0e17d1c9139202 not found: ID does not exist" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.320678 4725 scope.go:117] "RemoveContainer" containerID="bca41fd1286809743b4fc90069c0a9116affc9e0e572ab2601a211bbc0ae4162" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.322368 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bca41fd1286809743b4fc90069c0a9116affc9e0e572ab2601a211bbc0ae4162"} err="failed to get container status \"bca41fd1286809743b4fc90069c0a9116affc9e0e572ab2601a211bbc0ae4162\": rpc error: code = NotFound desc = could not find container \"bca41fd1286809743b4fc90069c0a9116affc9e0e572ab2601a211bbc0ae4162\": container with ID starting with bca41fd1286809743b4fc90069c0a9116affc9e0e572ab2601a211bbc0ae4162 not found: ID does not exist" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.356445 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6b9jj\" (UniqueName: \"kubernetes.io/projected/2aa09706-8333-48d8-8bde-64b4ccba5d09-kube-api-access-6b9jj\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.356492 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aa09706-8333-48d8-8bde-64b4ccba5d09-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.356502 4725 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2aa09706-8333-48d8-8bde-64b4ccba5d09-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.356511 4725 reconciler_common.go:293] 
"Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aa09706-8333-48d8-8bde-64b4ccba5d09-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.356519 4725 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2aa09706-8333-48d8-8bde-64b4ccba5d09-logs\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.356528 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2aa09706-8333-48d8-8bde-64b4ccba5d09-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.356550 4725 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.379197 4725 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.458630 4725 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.623532 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.660525 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.687420 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 13:32:14 crc kubenswrapper[4725]: E1014 13:32:14.687840 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aa09706-8333-48d8-8bde-64b4ccba5d09" containerName="glance-log" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.687856 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aa09706-8333-48d8-8bde-64b4ccba5d09" containerName="glance-log" Oct 14 13:32:14 crc kubenswrapper[4725]: E1014 13:32:14.687870 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aa09706-8333-48d8-8bde-64b4ccba5d09" containerName="glance-httpd" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.687875 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aa09706-8333-48d8-8bde-64b4ccba5d09" containerName="glance-httpd" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.688064 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="2aa09706-8333-48d8-8bde-64b4ccba5d09" containerName="glance-httpd" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.688081 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="2aa09706-8333-48d8-8bde-64b4ccba5d09" containerName="glance-log" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.689028 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.699876 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.712371 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.867768 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ffcd111-cd30-4d4e-933f-5c82abf7de93-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3ffcd111-cd30-4d4e-933f-5c82abf7de93\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.867847 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ffcd111-cd30-4d4e-933f-5c82abf7de93-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3ffcd111-cd30-4d4e-933f-5c82abf7de93\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.867877 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ffcd111-cd30-4d4e-933f-5c82abf7de93-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3ffcd111-cd30-4d4e-933f-5c82abf7de93\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.867940 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"3ffcd111-cd30-4d4e-933f-5c82abf7de93\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.867961 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3ffcd111-cd30-4d4e-933f-5c82abf7de93-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3ffcd111-cd30-4d4e-933f-5c82abf7de93\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.868041 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ffcd111-cd30-4d4e-933f-5c82abf7de93-logs\") pod \"glance-default-internal-api-0\" (UID: \"3ffcd111-cd30-4d4e-933f-5c82abf7de93\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.868217 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppxvr\" (UniqueName: \"kubernetes.io/projected/3ffcd111-cd30-4d4e-933f-5c82abf7de93-kube-api-access-ppxvr\") pod \"glance-default-internal-api-0\" (UID: \"3ffcd111-cd30-4d4e-933f-5c82abf7de93\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.971742 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ffcd111-cd30-4d4e-933f-5c82abf7de93-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"3ffcd111-cd30-4d4e-933f-5c82abf7de93\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.972350 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ffcd111-cd30-4d4e-933f-5c82abf7de93-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3ffcd111-cd30-4d4e-933f-5c82abf7de93\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.972407 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ffcd111-cd30-4d4e-933f-5c82abf7de93-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3ffcd111-cd30-4d4e-933f-5c82abf7de93\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.972503 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"3ffcd111-cd30-4d4e-933f-5c82abf7de93\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.972535 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3ffcd111-cd30-4d4e-933f-5c82abf7de93-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3ffcd111-cd30-4d4e-933f-5c82abf7de93\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.972619 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ffcd111-cd30-4d4e-933f-5c82abf7de93-logs\") pod \"glance-default-internal-api-0\" (UID: \"3ffcd111-cd30-4d4e-933f-5c82abf7de93\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.972644 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppxvr\" (UniqueName: \"kubernetes.io/projected/3ffcd111-cd30-4d4e-933f-5c82abf7de93-kube-api-access-ppxvr\") pod \"glance-default-internal-api-0\" (UID: \"3ffcd111-cd30-4d4e-933f-5c82abf7de93\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.972671 4725 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"3ffcd111-cd30-4d4e-933f-5c82abf7de93\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.973090 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ffcd111-cd30-4d4e-933f-5c82abf7de93-logs\") pod \"glance-default-internal-api-0\" (UID: \"3ffcd111-cd30-4d4e-933f-5c82abf7de93\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.973134 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3ffcd111-cd30-4d4e-933f-5c82abf7de93-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3ffcd111-cd30-4d4e-933f-5c82abf7de93\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:14 crc kubenswrapper[4725]: 
I1014 13:32:14.976750 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ffcd111-cd30-4d4e-933f-5c82abf7de93-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3ffcd111-cd30-4d4e-933f-5c82abf7de93\") " pod="openstack/glance-default-internal-api-0"
Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.977236 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ffcd111-cd30-4d4e-933f-5c82abf7de93-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3ffcd111-cd30-4d4e-933f-5c82abf7de93\") " pod="openstack/glance-default-internal-api-0"
Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.995952 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ffcd111-cd30-4d4e-933f-5c82abf7de93-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3ffcd111-cd30-4d4e-933f-5c82abf7de93\") " pod="openstack/glance-default-internal-api-0"
Oct 14 13:32:14 crc kubenswrapper[4725]: I1014 13:32:14.999205 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppxvr\" (UniqueName: \"kubernetes.io/projected/3ffcd111-cd30-4d4e-933f-5c82abf7de93-kube-api-access-ppxvr\") pod \"glance-default-internal-api-0\" (UID: \"3ffcd111-cd30-4d4e-933f-5c82abf7de93\") " pod="openstack/glance-default-internal-api-0"
Oct 14 13:32:15 crc kubenswrapper[4725]: I1014 13:32:15.043998 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"3ffcd111-cd30-4d4e-933f-5c82abf7de93\") " pod="openstack/glance-default-internal-api-0"
Oct 14 13:32:15 crc kubenswrapper[4725]: I1014 13:32:15.255843 4725 generic.go:334] "Generic (PLEG): container finished" podID="8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5" containerID="c54365e21dec2b85fc1dd23f52e73d5be1265474d48de1cb876b46d06d305f4e" exitCode=0
Oct 14 13:32:15 crc kubenswrapper[4725]: I1014 13:32:15.255880 4725 generic.go:334] "Generic (PLEG): container finished" podID="8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5" containerID="87d1af5334ddcb12b15a351243fe780f7d76938c4d41bf5ec075198137890a83" exitCode=143
Oct 14 13:32:15 crc kubenswrapper[4725]: I1014 13:32:15.255937 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5","Type":"ContainerDied","Data":"c54365e21dec2b85fc1dd23f52e73d5be1265474d48de1cb876b46d06d305f4e"}
Oct 14 13:32:15 crc kubenswrapper[4725]: I1014 13:32:15.255984 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5","Type":"ContainerDied","Data":"87d1af5334ddcb12b15a351243fe780f7d76938c4d41bf5ec075198137890a83"}
Oct 14 13:32:15 crc kubenswrapper[4725]: I1014 13:32:15.335112 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Oct 14 13:32:15 crc kubenswrapper[4725]: I1014 13:32:15.932581 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2aa09706-8333-48d8-8bde-64b4ccba5d09" path="/var/lib/kubelet/pods/2aa09706-8333-48d8-8bde-64b4ccba5d09/volumes"
Oct 14 13:32:16 crc kubenswrapper[4725]: I1014 13:32:16.274427 4725 generic.go:334] "Generic (PLEG): container finished" podID="16644f47-970c-4ed8-b44d-04a8f4765a63" containerID="a5d1b40ce1d3e6b000dc8866622c16730ff977dee3bfbf640f75c48553e2600b" exitCode=0
Oct 14 13:32:16 crc kubenswrapper[4725]: I1014 13:32:16.274497 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2sx29" event={"ID":"16644f47-970c-4ed8-b44d-04a8f4765a63","Type":"ContainerDied","Data":"a5d1b40ce1d3e6b000dc8866622c16730ff977dee3bfbf640f75c48553e2600b"}
Oct 14 13:32:16 crc kubenswrapper[4725]: I1014 13:32:16.631123 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Oct 14 13:32:18 crc kubenswrapper[4725]: I1014 13:32:18.493471 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2sx29"
Oct 14 13:32:18 crc kubenswrapper[4725]: I1014 13:32:18.635868 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/16644f47-970c-4ed8-b44d-04a8f4765a63-credential-keys\") pod \"16644f47-970c-4ed8-b44d-04a8f4765a63\" (UID: \"16644f47-970c-4ed8-b44d-04a8f4765a63\") "
Oct 14 13:32:18 crc kubenswrapper[4725]: I1014 13:32:18.635930 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16644f47-970c-4ed8-b44d-04a8f4765a63-config-data\") pod \"16644f47-970c-4ed8-b44d-04a8f4765a63\" (UID: \"16644f47-970c-4ed8-b44d-04a8f4765a63\") "
Oct 14 13:32:18 crc kubenswrapper[4725]: I1014 13:32:18.635960 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16644f47-970c-4ed8-b44d-04a8f4765a63-combined-ca-bundle\") pod \"16644f47-970c-4ed8-b44d-04a8f4765a63\" (UID: \"16644f47-970c-4ed8-b44d-04a8f4765a63\") "
Oct 14 13:32:18 crc kubenswrapper[4725]: I1014 13:32:18.635992 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmfrq\" (UniqueName: \"kubernetes.io/projected/16644f47-970c-4ed8-b44d-04a8f4765a63-kube-api-access-zmfrq\") pod \"16644f47-970c-4ed8-b44d-04a8f4765a63\" (UID: \"16644f47-970c-4ed8-b44d-04a8f4765a63\") "
Oct 14 13:32:18 crc kubenswrapper[4725]: I1014 13:32:18.636053 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16644f47-970c-4ed8-b44d-04a8f4765a63-scripts\") pod \"16644f47-970c-4ed8-b44d-04a8f4765a63\" (UID: \"16644f47-970c-4ed8-b44d-04a8f4765a63\") "
Oct 14 13:32:18 crc kubenswrapper[4725]: I1014 13:32:18.636255 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/16644f47-970c-4ed8-b44d-04a8f4765a63-fernet-keys\") pod \"16644f47-970c-4ed8-b44d-04a8f4765a63\" (UID: \"16644f47-970c-4ed8-b44d-04a8f4765a63\") "
Oct 14 13:32:18 crc kubenswrapper[4725]: I1014 13:32:18.649448 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16644f47-970c-4ed8-b44d-04a8f4765a63-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "16644f47-970c-4ed8-b44d-04a8f4765a63" (UID: "16644f47-970c-4ed8-b44d-04a8f4765a63"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:32:18 crc kubenswrapper[4725]: I1014 13:32:18.649732 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16644f47-970c-4ed8-b44d-04a8f4765a63-kube-api-access-zmfrq" (OuterVolumeSpecName: "kube-api-access-zmfrq") pod "16644f47-970c-4ed8-b44d-04a8f4765a63" (UID: "16644f47-970c-4ed8-b44d-04a8f4765a63"). InnerVolumeSpecName "kube-api-access-zmfrq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:32:18 crc kubenswrapper[4725]: I1014 13:32:18.659748 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16644f47-970c-4ed8-b44d-04a8f4765a63-scripts" (OuterVolumeSpecName: "scripts") pod "16644f47-970c-4ed8-b44d-04a8f4765a63" (UID: "16644f47-970c-4ed8-b44d-04a8f4765a63"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:32:18 crc kubenswrapper[4725]: I1014 13:32:18.659962 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16644f47-970c-4ed8-b44d-04a8f4765a63-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "16644f47-970c-4ed8-b44d-04a8f4765a63" (UID: "16644f47-970c-4ed8-b44d-04a8f4765a63"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:32:18 crc kubenswrapper[4725]: I1014 13:32:18.685937 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16644f47-970c-4ed8-b44d-04a8f4765a63-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "16644f47-970c-4ed8-b44d-04a8f4765a63" (UID: "16644f47-970c-4ed8-b44d-04a8f4765a63"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:32:18 crc kubenswrapper[4725]: I1014 13:32:18.694928 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16644f47-970c-4ed8-b44d-04a8f4765a63-config-data" (OuterVolumeSpecName: "config-data") pod "16644f47-970c-4ed8-b44d-04a8f4765a63" (UID: "16644f47-970c-4ed8-b44d-04a8f4765a63"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:32:18 crc kubenswrapper[4725]: I1014 13:32:18.738816 4725 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/16644f47-970c-4ed8-b44d-04a8f4765a63-fernet-keys\") on node \"crc\" DevicePath \"\""
Oct 14 13:32:18 crc kubenswrapper[4725]: I1014 13:32:18.738860 4725 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/16644f47-970c-4ed8-b44d-04a8f4765a63-credential-keys\") on node \"crc\" DevicePath \"\""
Oct 14 13:32:18 crc kubenswrapper[4725]: I1014 13:32:18.738872 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16644f47-970c-4ed8-b44d-04a8f4765a63-config-data\") on node \"crc\" DevicePath \"\""
Oct 14 13:32:18 crc kubenswrapper[4725]: I1014 13:32:18.738880 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16644f47-970c-4ed8-b44d-04a8f4765a63-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 14 13:32:18 crc kubenswrapper[4725]: I1014 13:32:18.738890 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmfrq\" (UniqueName: \"kubernetes.io/projected/16644f47-970c-4ed8-b44d-04a8f4765a63-kube-api-access-zmfrq\") on node \"crc\" DevicePath \"\""
Oct 14 13:32:18 crc kubenswrapper[4725]: I1014 13:32:18.738900 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16644f47-970c-4ed8-b44d-04a8f4765a63-scripts\") on node \"crc\" DevicePath \"\""
Oct 14 13:32:18 crc kubenswrapper[4725]: I1014 13:32:18.984569 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Oct 14 13:32:18 crc kubenswrapper[4725]: W1014 13:32:18.988876 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ffcd111_cd30_4d4e_933f_5c82abf7de93.slice/crio-b48f4cb54af856670f4177c8d8efc0debd3c6ff6265dd2b0b252ab37e31687d6 WatchSource:0}: Error finding container b48f4cb54af856670f4177c8d8efc0debd3c6ff6265dd2b0b252ab37e31687d6: Status 404 returned error can't find the container with id b48f4cb54af856670f4177c8d8efc0debd3c6ff6265dd2b0b252ab37e31687d6
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.265616 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b5c85b87-n8mrx"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.287672 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f49767877-s5fb5"]
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.366108 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2sx29"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.366740 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-9d5c84b44-vssnm"]
Oct 14 13:32:19 crc kubenswrapper[4725]: E1014 13:32:19.367647 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16644f47-970c-4ed8-b44d-04a8f4765a63" containerName="keystone-bootstrap"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.367686 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="16644f47-970c-4ed8-b44d-04a8f4765a63" containerName="keystone-bootstrap"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.368195 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="16644f47-970c-4ed8-b44d-04a8f4765a63" containerName="keystone-bootstrap"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.369995 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2sx29" event={"ID":"16644f47-970c-4ed8-b44d-04a8f4765a63","Type":"ContainerDied","Data":"07e0f5b82c539566355b10bc94d08d4a2a5a535cd372f08e6a5a24265b8ed692"}
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.370043 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07e0f5b82c539566355b10bc94d08d4a2a5a535cd372f08e6a5a24265b8ed692"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.370146 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-9d5c84b44-vssnm"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.373812 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.380397 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3ffcd111-cd30-4d4e-933f-5c82abf7de93","Type":"ContainerStarted","Data":"b48f4cb54af856670f4177c8d8efc0debd3c6ff6265dd2b0b252ab37e31687d6"}
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.408232 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-9d5c84b44-vssnm"]
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.466208 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-kqvmq"]
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.466817 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-kqvmq" podUID="4dfca925-34cf-487b-a4ad-5a16a8f24b65" containerName="dnsmasq-dns" containerID="cri-o://4f89fdc5517f31dcbd79fff79e47536fe39014c4c380886c42baf9a737b3885c" gracePeriod=10
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.468375 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/551500de-77a9-4f28-ab4b-b8259e04804b-combined-ca-bundle\") pod \"horizon-9d5c84b44-vssnm\" (UID: \"551500de-77a9-4f28-ab4b-b8259e04804b\") " pod="openstack/horizon-9d5c84b44-vssnm"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.485964 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkvfr\" (UniqueName: \"kubernetes.io/projected/551500de-77a9-4f28-ab4b-b8259e04804b-kube-api-access-qkvfr\") pod \"horizon-9d5c84b44-vssnm\" (UID: \"551500de-77a9-4f28-ab4b-b8259e04804b\") " pod="openstack/horizon-9d5c84b44-vssnm"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.486198 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/551500de-77a9-4f28-ab4b-b8259e04804b-scripts\") pod \"horizon-9d5c84b44-vssnm\" (UID: \"551500de-77a9-4f28-ab4b-b8259e04804b\") " pod="openstack/horizon-9d5c84b44-vssnm"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.486303 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/551500de-77a9-4f28-ab4b-b8259e04804b-logs\") pod \"horizon-9d5c84b44-vssnm\" (UID: \"551500de-77a9-4f28-ab4b-b8259e04804b\") " pod="openstack/horizon-9d5c84b44-vssnm"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.486330 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/551500de-77a9-4f28-ab4b-b8259e04804b-config-data\") pod \"horizon-9d5c84b44-vssnm\" (UID: \"551500de-77a9-4f28-ab4b-b8259e04804b\") " pod="openstack/horizon-9d5c84b44-vssnm"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.486363 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/551500de-77a9-4f28-ab4b-b8259e04804b-horizon-tls-certs\") pod \"horizon-9d5c84b44-vssnm\" (UID: \"551500de-77a9-4f28-ab4b-b8259e04804b\") " pod="openstack/horizon-9d5c84b44-vssnm"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.486393 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/551500de-77a9-4f28-ab4b-b8259e04804b-horizon-secret-key\") pod \"horizon-9d5c84b44-vssnm\" (UID: \"551500de-77a9-4f28-ab4b-b8259e04804b\") " pod="openstack/horizon-9d5c84b44-vssnm"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.518673 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-65fbf987c-h98xp"]
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.562834 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7cdf854644-xbv6p"]
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.569308 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7cdf854644-xbv6p"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.589517 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/551500de-77a9-4f28-ab4b-b8259e04804b-scripts\") pod \"horizon-9d5c84b44-vssnm\" (UID: \"551500de-77a9-4f28-ab4b-b8259e04804b\") " pod="openstack/horizon-9d5c84b44-vssnm"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.589726 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/551500de-77a9-4f28-ab4b-b8259e04804b-logs\") pod \"horizon-9d5c84b44-vssnm\" (UID: \"551500de-77a9-4f28-ab4b-b8259e04804b\") " pod="openstack/horizon-9d5c84b44-vssnm"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.589822 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/551500de-77a9-4f28-ab4b-b8259e04804b-config-data\") pod \"horizon-9d5c84b44-vssnm\" (UID: \"551500de-77a9-4f28-ab4b-b8259e04804b\") " pod="openstack/horizon-9d5c84b44-vssnm"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.589904 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/551500de-77a9-4f28-ab4b-b8259e04804b-horizon-tls-certs\") pod \"horizon-9d5c84b44-vssnm\" (UID: \"551500de-77a9-4f28-ab4b-b8259e04804b\") " pod="openstack/horizon-9d5c84b44-vssnm"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.589939 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/551500de-77a9-4f28-ab4b-b8259e04804b-horizon-secret-key\") pod \"horizon-9d5c84b44-vssnm\" (UID: \"551500de-77a9-4f28-ab4b-b8259e04804b\") " pod="openstack/horizon-9d5c84b44-vssnm"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.590137 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/551500de-77a9-4f28-ab4b-b8259e04804b-combined-ca-bundle\") pod \"horizon-9d5c84b44-vssnm\" (UID: \"551500de-77a9-4f28-ab4b-b8259e04804b\") " pod="openstack/horizon-9d5c84b44-vssnm"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.590252 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkvfr\" (UniqueName: \"kubernetes.io/projected/551500de-77a9-4f28-ab4b-b8259e04804b-kube-api-access-qkvfr\") pod \"horizon-9d5c84b44-vssnm\" (UID: \"551500de-77a9-4f28-ab4b-b8259e04804b\") " pod="openstack/horizon-9d5c84b44-vssnm"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.590925 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/551500de-77a9-4f28-ab4b-b8259e04804b-scripts\") pod \"horizon-9d5c84b44-vssnm\" (UID: \"551500de-77a9-4f28-ab4b-b8259e04804b\") " pod="openstack/horizon-9d5c84b44-vssnm"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.590958 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/551500de-77a9-4f28-ab4b-b8259e04804b-logs\") pod \"horizon-9d5c84b44-vssnm\" (UID: \"551500de-77a9-4f28-ab4b-b8259e04804b\") " pod="openstack/horizon-9d5c84b44-vssnm"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.592860 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/551500de-77a9-4f28-ab4b-b8259e04804b-config-data\") pod \"horizon-9d5c84b44-vssnm\" (UID: \"551500de-77a9-4f28-ab4b-b8259e04804b\") " pod="openstack/horizon-9d5c84b44-vssnm"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.595558 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7cdf854644-xbv6p"]
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.605049 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/551500de-77a9-4f28-ab4b-b8259e04804b-horizon-secret-key\") pod \"horizon-9d5c84b44-vssnm\" (UID: \"551500de-77a9-4f28-ab4b-b8259e04804b\") " pod="openstack/horizon-9d5c84b44-vssnm"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.606520 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/551500de-77a9-4f28-ab4b-b8259e04804b-combined-ca-bundle\") pod \"horizon-9d5c84b44-vssnm\" (UID: \"551500de-77a9-4f28-ab4b-b8259e04804b\") " pod="openstack/horizon-9d5c84b44-vssnm"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.617020 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/551500de-77a9-4f28-ab4b-b8259e04804b-horizon-tls-certs\") pod \"horizon-9d5c84b44-vssnm\" (UID: \"551500de-77a9-4f28-ab4b-b8259e04804b\") " pod="openstack/horizon-9d5c84b44-vssnm"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.617222 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkvfr\" (UniqueName: \"kubernetes.io/projected/551500de-77a9-4f28-ab4b-b8259e04804b-kube-api-access-qkvfr\") pod \"horizon-9d5c84b44-vssnm\" (UID: \"551500de-77a9-4f28-ab4b-b8259e04804b\") " pod="openstack/horizon-9d5c84b44-vssnm"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.634614 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-2sx29"]
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.645405 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-2sx29"]
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.691480 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkgwd\" (UniqueName: \"kubernetes.io/projected/0f50192b-c5ae-418d-9d3a-a670d49f8ded-kube-api-access-vkgwd\") pod \"horizon-7cdf854644-xbv6p\" (UID: \"0f50192b-c5ae-418d-9d3a-a670d49f8ded\") " pod="openstack/horizon-7cdf854644-xbv6p"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.691527 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f50192b-c5ae-418d-9d3a-a670d49f8ded-combined-ca-bundle\") pod \"horizon-7cdf854644-xbv6p\" (UID: \"0f50192b-c5ae-418d-9d3a-a670d49f8ded\") " pod="openstack/horizon-7cdf854644-xbv6p"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.691557 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0f50192b-c5ae-418d-9d3a-a670d49f8ded-horizon-secret-key\") pod \"horizon-7cdf854644-xbv6p\" (UID: \"0f50192b-c5ae-418d-9d3a-a670d49f8ded\") " pod="openstack/horizon-7cdf854644-xbv6p"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.691633 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f50192b-c5ae-418d-9d3a-a670d49f8ded-horizon-tls-certs\") pod \"horizon-7cdf854644-xbv6p\" (UID: \"0f50192b-c5ae-418d-9d3a-a670d49f8ded\") " pod="openstack/horizon-7cdf854644-xbv6p"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.691658 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f50192b-c5ae-418d-9d3a-a670d49f8ded-logs\") pod \"horizon-7cdf854644-xbv6p\" (UID: \"0f50192b-c5ae-418d-9d3a-a670d49f8ded\") " pod="openstack/horizon-7cdf854644-xbv6p"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.691691 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0f50192b-c5ae-418d-9d3a-a670d49f8ded-scripts\") pod \"horizon-7cdf854644-xbv6p\" (UID: \"0f50192b-c5ae-418d-9d3a-a670d49f8ded\") " pod="openstack/horizon-7cdf854644-xbv6p"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.691747 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0f50192b-c5ae-418d-9d3a-a670d49f8ded-config-data\") pod \"horizon-7cdf854644-xbv6p\" (UID: \"0f50192b-c5ae-418d-9d3a-a670d49f8ded\") " pod="openstack/horizon-7cdf854644-xbv6p"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.693319 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-tlq7r"]
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.696352 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tlq7r"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.701435 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.701815 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.702020 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-h7l4t"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.702985 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.709512 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-tlq7r"]
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.712532 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-9d5c84b44-vssnm"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.793655 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f50192b-c5ae-418d-9d3a-a670d49f8ded-horizon-tls-certs\") pod \"horizon-7cdf854644-xbv6p\" (UID: \"0f50192b-c5ae-418d-9d3a-a670d49f8ded\") " pod="openstack/horizon-7cdf854644-xbv6p"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.793707 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f50192b-c5ae-418d-9d3a-a670d49f8ded-logs\") pod \"horizon-7cdf854644-xbv6p\" (UID: \"0f50192b-c5ae-418d-9d3a-a670d49f8ded\") " pod="openstack/horizon-7cdf854644-xbv6p"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.793739 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0f50192b-c5ae-418d-9d3a-a670d49f8ded-scripts\") pod \"horizon-7cdf854644-xbv6p\" (UID: \"0f50192b-c5ae-418d-9d3a-a670d49f8ded\") " pod="openstack/horizon-7cdf854644-xbv6p"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.793762 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22b0bf13-3251-446f-946b-273f89349427-combined-ca-bundle\") pod \"keystone-bootstrap-tlq7r\" (UID: \"22b0bf13-3251-446f-946b-273f89349427\") " pod="openstack/keystone-bootstrap-tlq7r"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.793784 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64w96\" (UniqueName: \"kubernetes.io/projected/22b0bf13-3251-446f-946b-273f89349427-kube-api-access-64w96\") pod \"keystone-bootstrap-tlq7r\" (UID: \"22b0bf13-3251-446f-946b-273f89349427\") " pod="openstack/keystone-bootstrap-tlq7r"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.793805 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/22b0bf13-3251-446f-946b-273f89349427-fernet-keys\") pod \"keystone-bootstrap-tlq7r\" (UID: \"22b0bf13-3251-446f-946b-273f89349427\") " pod="openstack/keystone-bootstrap-tlq7r"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.793824 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0f50192b-c5ae-418d-9d3a-a670d49f8ded-config-data\") pod \"horizon-7cdf854644-xbv6p\" (UID: \"0f50192b-c5ae-418d-9d3a-a670d49f8ded\") " pod="openstack/horizon-7cdf854644-xbv6p"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.793844 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22b0bf13-3251-446f-946b-273f89349427-scripts\") pod \"keystone-bootstrap-tlq7r\" (UID: \"22b0bf13-3251-446f-946b-273f89349427\") " pod="openstack/keystone-bootstrap-tlq7r"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.793870 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkgwd\" (UniqueName: \"kubernetes.io/projected/0f50192b-c5ae-418d-9d3a-a670d49f8ded-kube-api-access-vkgwd\") pod \"horizon-7cdf854644-xbv6p\" (UID: \"0f50192b-c5ae-418d-9d3a-a670d49f8ded\") " pod="openstack/horizon-7cdf854644-xbv6p"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.793887 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f50192b-c5ae-418d-9d3a-a670d49f8ded-combined-ca-bundle\") pod \"horizon-7cdf854644-xbv6p\" (UID: \"0f50192b-c5ae-418d-9d3a-a670d49f8ded\") " pod="openstack/horizon-7cdf854644-xbv6p"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.793905 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22b0bf13-3251-446f-946b-273f89349427-config-data\") pod \"keystone-bootstrap-tlq7r\" (UID: \"22b0bf13-3251-446f-946b-273f89349427\") " pod="openstack/keystone-bootstrap-tlq7r"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.793929 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0f50192b-c5ae-418d-9d3a-a670d49f8ded-horizon-secret-key\") pod \"horizon-7cdf854644-xbv6p\" (UID: \"0f50192b-c5ae-418d-9d3a-a670d49f8ded\") " pod="openstack/horizon-7cdf854644-xbv6p"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.793946 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/22b0bf13-3251-446f-946b-273f89349427-credential-keys\") pod \"keystone-bootstrap-tlq7r\" (UID: \"22b0bf13-3251-446f-946b-273f89349427\") " pod="openstack/keystone-bootstrap-tlq7r"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.794706 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f50192b-c5ae-418d-9d3a-a670d49f8ded-logs\") pod \"horizon-7cdf854644-xbv6p\" (UID: \"0f50192b-c5ae-418d-9d3a-a670d49f8ded\") " pod="openstack/horizon-7cdf854644-xbv6p"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.795237 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0f50192b-c5ae-418d-9d3a-a670d49f8ded-scripts\") pod \"horizon-7cdf854644-xbv6p\" (UID: \"0f50192b-c5ae-418d-9d3a-a670d49f8ded\") " pod="openstack/horizon-7cdf854644-xbv6p"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.796297 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0f50192b-c5ae-418d-9d3a-a670d49f8ded-config-data\") pod \"horizon-7cdf854644-xbv6p\" (UID: \"0f50192b-c5ae-418d-9d3a-a670d49f8ded\") " pod="openstack/horizon-7cdf854644-xbv6p"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.798423 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f50192b-c5ae-418d-9d3a-a670d49f8ded-horizon-tls-certs\") pod \"horizon-7cdf854644-xbv6p\" (UID: \"0f50192b-c5ae-418d-9d3a-a670d49f8ded\") " pod="openstack/horizon-7cdf854644-xbv6p"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.802405 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f50192b-c5ae-418d-9d3a-a670d49f8ded-combined-ca-bundle\") pod \"horizon-7cdf854644-xbv6p\" (UID: \"0f50192b-c5ae-418d-9d3a-a670d49f8ded\") " pod="openstack/horizon-7cdf854644-xbv6p"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.806754 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0f50192b-c5ae-418d-9d3a-a670d49f8ded-horizon-secret-key\") pod \"horizon-7cdf854644-xbv6p\" (UID: \"0f50192b-c5ae-418d-9d3a-a670d49f8ded\") " pod="openstack/horizon-7cdf854644-xbv6p"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.819013 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkgwd\" (UniqueName: \"kubernetes.io/projected/0f50192b-c5ae-418d-9d3a-a670d49f8ded-kube-api-access-vkgwd\") pod \"horizon-7cdf854644-xbv6p\" (UID: \"0f50192b-c5ae-418d-9d3a-a670d49f8ded\") " pod="openstack/horizon-7cdf854644-xbv6p"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.895269 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22b0bf13-3251-446f-946b-273f89349427-config-data\") pod \"keystone-bootstrap-tlq7r\" (UID: \"22b0bf13-3251-446f-946b-273f89349427\") " pod="openstack/keystone-bootstrap-tlq7r"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.895333 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/22b0bf13-3251-446f-946b-273f89349427-credential-keys\") pod \"keystone-bootstrap-tlq7r\" (UID: \"22b0bf13-3251-446f-946b-273f89349427\") " pod="openstack/keystone-bootstrap-tlq7r"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.895454 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22b0bf13-3251-446f-946b-273f89349427-combined-ca-bundle\") pod \"keystone-bootstrap-tlq7r\" (UID: \"22b0bf13-3251-446f-946b-273f89349427\") " pod="openstack/keystone-bootstrap-tlq7r"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.895484 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64w96\" (UniqueName: \"kubernetes.io/projected/22b0bf13-3251-446f-946b-273f89349427-kube-api-access-64w96\") pod \"keystone-bootstrap-tlq7r\" (UID: \"22b0bf13-3251-446f-946b-273f89349427\") " pod="openstack/keystone-bootstrap-tlq7r"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.895505 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/22b0bf13-3251-446f-946b-273f89349427-fernet-keys\") pod \"keystone-bootstrap-tlq7r\" (UID: \"22b0bf13-3251-446f-946b-273f89349427\") " pod="openstack/keystone-bootstrap-tlq7r"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.895536 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22b0bf13-3251-446f-946b-273f89349427-scripts\") pod \"keystone-bootstrap-tlq7r\" (UID: \"22b0bf13-3251-446f-946b-273f89349427\") " pod="openstack/keystone-bootstrap-tlq7r"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.901075 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22b0bf13-3251-446f-946b-273f89349427-combined-ca-bundle\") pod \"keystone-bootstrap-tlq7r\" (UID: \"22b0bf13-3251-446f-946b-273f89349427\") " pod="openstack/keystone-bootstrap-tlq7r"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.902177 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22b0bf13-3251-446f-946b-273f89349427-scripts\") pod \"keystone-bootstrap-tlq7r\" (UID: \"22b0bf13-3251-446f-946b-273f89349427\") " pod="openstack/keystone-bootstrap-tlq7r"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.902734 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/22b0bf13-3251-446f-946b-273f89349427-fernet-keys\") pod \"keystone-bootstrap-tlq7r\" (UID: \"22b0bf13-3251-446f-946b-273f89349427\") " pod="openstack/keystone-bootstrap-tlq7r"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.903250 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/22b0bf13-3251-446f-946b-273f89349427-credential-keys\") pod \"keystone-bootstrap-tlq7r\" (UID: \"22b0bf13-3251-446f-946b-273f89349427\") " pod="openstack/keystone-bootstrap-tlq7r"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.903801 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22b0bf13-3251-446f-946b-273f89349427-config-data\") pod \"keystone-bootstrap-tlq7r\" (UID: \"22b0bf13-3251-446f-946b-273f89349427\") " pod="openstack/keystone-bootstrap-tlq7r"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.924127 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64w96\" (UniqueName: \"kubernetes.io/projected/22b0bf13-3251-446f-946b-273f89349427-kube-api-access-64w96\") pod \"keystone-bootstrap-tlq7r\" (UID: \"22b0bf13-3251-446f-946b-273f89349427\") " pod="openstack/keystone-bootstrap-tlq7r"
Oct 14 13:32:19 crc kubenswrapper[4725]: I1014 13:32:19.937130 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16644f47-970c-4ed8-b44d-04a8f4765a63" path="/var/lib/kubelet/pods/16644f47-970c-4ed8-b44d-04a8f4765a63/volumes"
Oct 14 13:32:20 crc kubenswrapper[4725]: I1014 13:32:20.041198 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7cdf854644-xbv6p"
Oct 14 13:32:20 crc kubenswrapper[4725]: I1014 13:32:20.052267 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tlq7r"
Oct 14 13:32:20 crc kubenswrapper[4725]: I1014 13:32:20.390234 4725 generic.go:334] "Generic (PLEG): container finished" podID="4dfca925-34cf-487b-a4ad-5a16a8f24b65" containerID="4f89fdc5517f31dcbd79fff79e47536fe39014c4c380886c42baf9a737b3885c" exitCode=0
Oct 14 13:32:20 crc kubenswrapper[4725]: I1014 13:32:20.390304 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-kqvmq" event={"ID":"4dfca925-34cf-487b-a4ad-5a16a8f24b65","Type":"ContainerDied","Data":"4f89fdc5517f31dcbd79fff79e47536fe39014c4c380886c42baf9a737b3885c"}
Oct 14 13:32:20 crc kubenswrapper[4725]: I1014 13:32:20.393499 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3ffcd111-cd30-4d4e-933f-5c82abf7de93","Type":"ContainerStarted","Data":"6eb6ad83bc4cc4da3ee0f8560a09d6dbd5860139c4f02479cf960e0fe8528476"}
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.203531 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.213592 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-kqvmq"
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.339805 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dfca925-34cf-487b-a4ad-5a16a8f24b65-ovsdbserver-sb\") pod \"4dfca925-34cf-487b-a4ad-5a16a8f24b65\" (UID: \"4dfca925-34cf-487b-a4ad-5a16a8f24b65\") "
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.339895 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dfca925-34cf-487b-a4ad-5a16a8f24b65-ovsdbserver-nb\") pod \"4dfca925-34cf-487b-a4ad-5a16a8f24b65\" (UID: \"4dfca925-34cf-487b-a4ad-5a16a8f24b65\") "
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.339955 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr2j8\" (UniqueName: \"kubernetes.io/projected/8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5-kube-api-access-hr2j8\") pod \"8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5\" (UID: \"8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5\") "
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.340018 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c88n6\" (UniqueName: \"kubernetes.io/projected/4dfca925-34cf-487b-a4ad-5a16a8f24b65-kube-api-access-c88n6\") pod \"4dfca925-34cf-487b-a4ad-5a16a8f24b65\" (UID: \"4dfca925-34cf-487b-a4ad-5a16a8f24b65\") "
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.340099 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5-httpd-run\") pod \"8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5\" (UID: \"8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5\") "
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.340119 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5-logs\") pod \"8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5\" (UID: \"8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5\") "
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.340148 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dfca925-34cf-487b-a4ad-5a16a8f24b65-config\") pod \"4dfca925-34cf-487b-a4ad-5a16a8f24b65\" (UID: \"4dfca925-34cf-487b-a4ad-5a16a8f24b65\") "
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.340221 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5\" (UID: \"8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5\") "
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.340269 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dfca925-34cf-487b-a4ad-5a16a8f24b65-dns-svc\") pod \"4dfca925-34cf-487b-a4ad-5a16a8f24b65\" (UID: \"4dfca925-34cf-487b-a4ad-5a16a8f24b65\") "
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.340294 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5-combined-ca-bundle\") pod \"8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5\" (UID: \"8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5\") "
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.340314 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5-scripts\") pod \"8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5\" (UID: \"8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5\") "
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.340353 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5-config-data\") pod \"8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5\" (UID: \"8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5\") "
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.340615 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5" (UID: "8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.340861 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5-logs" (OuterVolumeSpecName: "logs") pod "8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5" (UID: "8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.341240 4725 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5-httpd-run\") on node \"crc\" DevicePath \"\""
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.344553 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5-scripts" (OuterVolumeSpecName: "scripts") pod "8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5" (UID: "8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.344573 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dfca925-34cf-487b-a4ad-5a16a8f24b65-kube-api-access-c88n6" (OuterVolumeSpecName: "kube-api-access-c88n6") pod "4dfca925-34cf-487b-a4ad-5a16a8f24b65" (UID: "4dfca925-34cf-487b-a4ad-5a16a8f24b65"). InnerVolumeSpecName "kube-api-access-c88n6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.344636 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5" (UID: "8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.358725 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5-kube-api-access-hr2j8" (OuterVolumeSpecName: "kube-api-access-hr2j8") pod "8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5" (UID: "8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5"). InnerVolumeSpecName "kube-api-access-hr2j8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.367213 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5" (UID: "8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.386434 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dfca925-34cf-487b-a4ad-5a16a8f24b65-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4dfca925-34cf-487b-a4ad-5a16a8f24b65" (UID: "4dfca925-34cf-487b-a4ad-5a16a8f24b65"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.387442 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dfca925-34cf-487b-a4ad-5a16a8f24b65-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4dfca925-34cf-487b-a4ad-5a16a8f24b65" (UID: "4dfca925-34cf-487b-a4ad-5a16a8f24b65"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.398713 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dfca925-34cf-487b-a4ad-5a16a8f24b65-config" (OuterVolumeSpecName: "config") pod "4dfca925-34cf-487b-a4ad-5a16a8f24b65" (UID: "4dfca925-34cf-487b-a4ad-5a16a8f24b65"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.398938 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dfca925-34cf-487b-a4ad-5a16a8f24b65-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4dfca925-34cf-487b-a4ad-5a16a8f24b65" (UID: "4dfca925-34cf-487b-a4ad-5a16a8f24b65"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.425415 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.425414 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5","Type":"ContainerDied","Data":"4be75e1ba730bd30a84a41c7b23bd16014914c389c8c554106b4267a5e012d0b"}
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.425597 4725 scope.go:117] "RemoveContainer" containerID="c54365e21dec2b85fc1dd23f52e73d5be1265474d48de1cb876b46d06d305f4e"
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.428421 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-kqvmq" event={"ID":"4dfca925-34cf-487b-a4ad-5a16a8f24b65","Type":"ContainerDied","Data":"11f8c3b74deb0633cf1e93d0921358edf764e3694a51cebe2de9b36bbad48362"}
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.428955 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-kqvmq"
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.434342 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5-config-data" (OuterVolumeSpecName: "config-data") pod "8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5" (UID: "8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.454525 4725 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" "
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.454606 4725 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dfca925-34cf-487b-a4ad-5a16a8f24b65-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.454616 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.454633 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5-scripts\") on node \"crc\" DevicePath \"\""
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.454642 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5-config-data\") on node \"crc\" DevicePath \"\""
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.454651 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dfca925-34cf-487b-a4ad-5a16a8f24b65-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.454662 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dfca925-34cf-487b-a4ad-5a16a8f24b65-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.454674 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hr2j8\" (UniqueName: \"kubernetes.io/projected/8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5-kube-api-access-hr2j8\") on node \"crc\" DevicePath \"\""
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.454683 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c88n6\" (UniqueName: \"kubernetes.io/projected/4dfca925-34cf-487b-a4ad-5a16a8f24b65-kube-api-access-c88n6\") on node \"crc\" DevicePath \"\""
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.454691 4725 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5-logs\") on node \"crc\" DevicePath \"\""
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.454704 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dfca925-34cf-487b-a4ad-5a16a8f24b65-config\") on node \"crc\" DevicePath \"\""
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.475883 4725 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc"
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.520105 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-kqvmq"]
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.525847 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-kqvmq"]
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.556512 4725 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\""
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.760384 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.775231 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.784636 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Oct 14 13:32:22 crc kubenswrapper[4725]: E1014 13:32:22.785189 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dfca925-34cf-487b-a4ad-5a16a8f24b65" containerName="dnsmasq-dns"
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.785211 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dfca925-34cf-487b-a4ad-5a16a8f24b65" containerName="dnsmasq-dns"
Oct 14 13:32:22 crc kubenswrapper[4725]: E1014 13:32:22.785227 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dfca925-34cf-487b-a4ad-5a16a8f24b65" containerName="init"
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.785234 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dfca925-34cf-487b-a4ad-5a16a8f24b65" containerName="init"
Oct 14 13:32:22 crc kubenswrapper[4725]: E1014 13:32:22.785271 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5" containerName="glance-log"
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.785277 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5" containerName="glance-log"
Oct 14 13:32:22 crc kubenswrapper[4725]: E1014 13:32:22.785293 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5" containerName="glance-httpd"
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.785299 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5" containerName="glance-httpd"
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.785945 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5" containerName="glance-httpd"
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.785973 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dfca925-34cf-487b-a4ad-5a16a8f24b65" containerName="dnsmasq-dns"
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.785983 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5" containerName="glance-log"
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.787322 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.789574 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.789887 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.803123 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.964440 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq5lv\" (UniqueName: \"kubernetes.io/projected/5d89387e-949e-49c8-b6a1-543aaa1a02d5-kube-api-access-zq5lv\") pod \"glance-default-external-api-0\" (UID: \"5d89387e-949e-49c8-b6a1-543aaa1a02d5\") " pod="openstack/glance-default-external-api-0"
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.964754 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"5d89387e-949e-49c8-b6a1-543aaa1a02d5\") " pod="openstack/glance-default-external-api-0"
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.964939 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5d89387e-949e-49c8-b6a1-543aaa1a02d5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5d89387e-949e-49c8-b6a1-543aaa1a02d5\") " pod="openstack/glance-default-external-api-0"
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.965070 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d89387e-949e-49c8-b6a1-543aaa1a02d5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5d89387e-949e-49c8-b6a1-543aaa1a02d5\") " pod="openstack/glance-default-external-api-0"
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.965256 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d89387e-949e-49c8-b6a1-543aaa1a02d5-config-data\") pod \"glance-default-external-api-0\" (UID: \"5d89387e-949e-49c8-b6a1-543aaa1a02d5\") " pod="openstack/glance-default-external-api-0"
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.965407 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d89387e-949e-49c8-b6a1-543aaa1a02d5-logs\") pod \"glance-default-external-api-0\" (UID: \"5d89387e-949e-49c8-b6a1-543aaa1a02d5\") " pod="openstack/glance-default-external-api-0"
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.965547 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d89387e-949e-49c8-b6a1-543aaa1a02d5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5d89387e-949e-49c8-b6a1-543aaa1a02d5\") " pod="openstack/glance-default-external-api-0"
Oct 14 13:32:22 crc kubenswrapper[4725]: I1014 13:32:22.965842 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d89387e-949e-49c8-b6a1-543aaa1a02d5-scripts\") pod \"glance-default-external-api-0\" (UID: \"5d89387e-949e-49c8-b6a1-543aaa1a02d5\") " pod="openstack/glance-default-external-api-0"
Oct 14 13:32:23 crc kubenswrapper[4725]: I1014 13:32:23.067971 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d89387e-949e-49c8-b6a1-543aaa1a02d5-logs\") pod \"glance-default-external-api-0\" (UID: \"5d89387e-949e-49c8-b6a1-543aaa1a02d5\") " pod="openstack/glance-default-external-api-0"
Oct 14 13:32:23 crc kubenswrapper[4725]: I1014 13:32:23.068070 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d89387e-949e-49c8-b6a1-543aaa1a02d5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5d89387e-949e-49c8-b6a1-543aaa1a02d5\") " pod="openstack/glance-default-external-api-0"
Oct 14 13:32:23 crc kubenswrapper[4725]: I1014 13:32:23.068225 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d89387e-949e-49c8-b6a1-543aaa1a02d5-scripts\") pod \"glance-default-external-api-0\" (UID: \"5d89387e-949e-49c8-b6a1-543aaa1a02d5\") " pod="openstack/glance-default-external-api-0"
Oct 14 13:32:23 crc kubenswrapper[4725]: I1014 13:32:23.068260 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq5lv\" (UniqueName: \"kubernetes.io/projected/5d89387e-949e-49c8-b6a1-543aaa1a02d5-kube-api-access-zq5lv\") pod \"glance-default-external-api-0\" (UID: \"5d89387e-949e-49c8-b6a1-543aaa1a02d5\") " pod="openstack/glance-default-external-api-0"
Oct 14 13:32:23 crc kubenswrapper[4725]: I1014 13:32:23.069007 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"5d89387e-949e-49c8-b6a1-543aaa1a02d5\") " pod="openstack/glance-default-external-api-0"
Oct 14 13:32:23 crc kubenswrapper[4725]: I1014 13:32:23.069101 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5d89387e-949e-49c8-b6a1-543aaa1a02d5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5d89387e-949e-49c8-b6a1-543aaa1a02d5\") " pod="openstack/glance-default-external-api-0"
Oct 14 13:32:23 crc kubenswrapper[4725]: I1014 13:32:23.069138 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d89387e-949e-49c8-b6a1-543aaa1a02d5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5d89387e-949e-49c8-b6a1-543aaa1a02d5\") " pod="openstack/glance-default-external-api-0"
Oct 14 13:32:23 crc kubenswrapper[4725]: I1014 13:32:23.069248 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d89387e-949e-49c8-b6a1-543aaa1a02d5-config-data\") pod \"glance-default-external-api-0\" (UID: \"5d89387e-949e-49c8-b6a1-543aaa1a02d5\") " pod="openstack/glance-default-external-api-0"
Oct 14 13:32:23 crc kubenswrapper[4725]: I1014 13:32:23.068477 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d89387e-949e-49c8-b6a1-543aaa1a02d5-logs\") pod \"glance-default-external-api-0\" (UID: \"5d89387e-949e-49c8-b6a1-543aaa1a02d5\") " pod="openstack/glance-default-external-api-0"
Oct 14 13:32:23 crc kubenswrapper[4725]: I1014 13:32:23.070353 4725 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"5d89387e-949e-49c8-b6a1-543aaa1a02d5\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0"
Oct 14 13:32:23 crc kubenswrapper[4725]: I1014 13:32:23.071092 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5d89387e-949e-49c8-b6a1-543aaa1a02d5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5d89387e-949e-49c8-b6a1-543aaa1a02d5\") " pod="openstack/glance-default-external-api-0"
Oct 14 13:32:23 crc kubenswrapper[4725]: I1014 13:32:23.076250 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d89387e-949e-49c8-b6a1-543aaa1a02d5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5d89387e-949e-49c8-b6a1-543aaa1a02d5\") " pod="openstack/glance-default-external-api-0"
Oct 14 13:32:23 crc kubenswrapper[4725]: I1014 13:32:23.076442 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d89387e-949e-49c8-b6a1-543aaa1a02d5-scripts\") pod \"glance-default-external-api-0\" (UID: \"5d89387e-949e-49c8-b6a1-543aaa1a02d5\") " pod="openstack/glance-default-external-api-0"
Oct 14 13:32:23 crc kubenswrapper[4725]: I1014 13:32:23.076543 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d89387e-949e-49c8-b6a1-543aaa1a02d5-config-data\") pod \"glance-default-external-api-0\" (UID: \"5d89387e-949e-49c8-b6a1-543aaa1a02d5\") " pod="openstack/glance-default-external-api-0"
Oct 14 13:32:23 crc kubenswrapper[4725]: I1014 13:32:23.087019 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zq5lv\" (UniqueName: \"kubernetes.io/projected/5d89387e-949e-49c8-b6a1-543aaa1a02d5-kube-api-access-zq5lv\") pod \"glance-default-external-api-0\" (UID: \"5d89387e-949e-49c8-b6a1-543aaa1a02d5\") " pod="openstack/glance-default-external-api-0"
Oct 14 13:32:23 crc kubenswrapper[4725]: I1014 13:32:23.088384 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d89387e-949e-49c8-b6a1-543aaa1a02d5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5d89387e-949e-49c8-b6a1-543aaa1a02d5\") " pod="openstack/glance-default-external-api-0"
Oct 14 13:32:23 crc kubenswrapper[4725]: I1014 13:32:23.094681 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"5d89387e-949e-49c8-b6a1-543aaa1a02d5\") " pod="openstack/glance-default-external-api-0"
Oct 14 13:32:23 crc kubenswrapper[4725]: I1014 13:32:23.124221 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Oct 14 13:32:23 crc kubenswrapper[4725]: I1014 13:32:23.945252 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dfca925-34cf-487b-a4ad-5a16a8f24b65" path="/var/lib/kubelet/pods/4dfca925-34cf-487b-a4ad-5a16a8f24b65/volumes"
Oct 14 13:32:23 crc kubenswrapper[4725]: I1014 13:32:23.946952 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5" path="/var/lib/kubelet/pods/8fbbbffe-8e57-422a-bc9d-f61b07e7c3a5/volumes"
Oct 14 13:32:25 crc kubenswrapper[4725]: I1014 13:32:25.532129 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-kqvmq" podUID="4dfca925-34cf-487b-a4ad-5a16a8f24b65" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: i/o timeout"
Oct 14 13:32:26 crc kubenswrapper[4725]: E1014 13:32:26.817875 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified"
Oct 14 13:32:26 crc kubenswrapper[4725]: E1014 13:32:26.818348 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n76hfdh659h664h59h5cbh5c4h6fh59bh5c7h94hd4hb9h5d4h679h5bchfch659h656h66fh696hfbh8dhd4h587h5c4h586h57h589h566hf4h59dq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p2xjc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-684dcc5995-mbkf4_openstack(486ea1cf-7023-407b-bb7f-c67ce34c42aa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Oct 14 13:32:26 crc kubenswrapper[4725]: E1014 13:32:26.839570 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-684dcc5995-mbkf4" podUID="486ea1cf-7023-407b-bb7f-c67ce34c42aa" Oct 14 13:32:26 crc kubenswrapper[4725]: E1014 13:32:26.866355 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Oct 14 13:32:26 crc kubenswrapper[4725]: E1014 13:32:26.866534 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n656h5fh5ffh598h9ch544h556h568h674h579h86hc6h689hd9h58fh59dhcbh677hf4h54fh567h64fhcdh5c8hch54ch57fhdbh56dh58fhb7h68fq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-brqf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6f49767877-s5fb5_openstack(6316dc0c-436a-4cb3-86ae-0d073a62980e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 13:32:26 crc kubenswrapper[4725]: E1014 13:32:26.868726 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6f49767877-s5fb5" podUID="6316dc0c-436a-4cb3-86ae-0d073a62980e" Oct 14 13:32:35 crc kubenswrapper[4725]: E1014 13:32:35.380200 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Oct 14 13:32:35 crc kubenswrapper[4725]: E1014 13:32:35.380876 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n8bh95hb5h7fhffh54ch56h5c9h5bch656h644h96h5dbh546h54h595h654h668h555hb6h5f6h56dh64dh696h6bh67h5dbh65ch66fh6hf5h85q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8wf6h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-65fbf987c-h98xp_openstack(f775e4a2-1844-4b66-ada3-15ce61e06ed7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 13:32:35 crc kubenswrapper[4725]: E1014 13:32:35.383321 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-65fbf987c-h98xp" podUID="f775e4a2-1844-4b66-ada3-15ce61e06ed7" Oct 14 13:32:35 crc kubenswrapper[4725]: I1014 13:32:35.528400 4725 generic.go:334] "Generic (PLEG): container finished" podID="9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0" containerID="9b0a728ed29905664ea6e13e6214b56b36b8af5b20176fbc4a20a1123e1ddfdf" exitCode=0 Oct 14 13:32:35 crc kubenswrapper[4725]: I1014 13:32:35.528805 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-5sg8k" event={"ID":"9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0","Type":"ContainerDied","Data":"9b0a728ed29905664ea6e13e6214b56b36b8af5b20176fbc4a20a1123e1ddfdf"} Oct 14 13:32:36 crc kubenswrapper[4725]: I1014 13:32:36.746120 4725 scope.go:117] "RemoveContainer" 
containerID="87d1af5334ddcb12b15a351243fe780f7d76938c4d41bf5ec075198137890a83" Oct 14 13:32:36 crc kubenswrapper[4725]: I1014 13:32:36.849711 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-684dcc5995-mbkf4" Oct 14 13:32:36 crc kubenswrapper[4725]: I1014 13:32:36.865957 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f49767877-s5fb5" Oct 14 13:32:36 crc kubenswrapper[4725]: I1014 13:32:36.867826 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6316dc0c-436a-4cb3-86ae-0d073a62980e-scripts\") pod \"6316dc0c-436a-4cb3-86ae-0d073a62980e\" (UID: \"6316dc0c-436a-4cb3-86ae-0d073a62980e\") " Oct 14 13:32:36 crc kubenswrapper[4725]: I1014 13:32:36.867887 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6316dc0c-436a-4cb3-86ae-0d073a62980e-config-data\") pod \"6316dc0c-436a-4cb3-86ae-0d073a62980e\" (UID: \"6316dc0c-436a-4cb3-86ae-0d073a62980e\") " Oct 14 13:32:36 crc kubenswrapper[4725]: I1014 13:32:36.867995 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/486ea1cf-7023-407b-bb7f-c67ce34c42aa-logs\") pod \"486ea1cf-7023-407b-bb7f-c67ce34c42aa\" (UID: \"486ea1cf-7023-407b-bb7f-c67ce34c42aa\") " Oct 14 13:32:36 crc kubenswrapper[4725]: I1014 13:32:36.868044 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6316dc0c-436a-4cb3-86ae-0d073a62980e-horizon-secret-key\") pod \"6316dc0c-436a-4cb3-86ae-0d073a62980e\" (UID: \"6316dc0c-436a-4cb3-86ae-0d073a62980e\") " Oct 14 13:32:36 crc kubenswrapper[4725]: I1014 13:32:36.868282 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2xjc\" (UniqueName: \"kubernetes.io/projected/486ea1cf-7023-407b-bb7f-c67ce34c42aa-kube-api-access-p2xjc\") pod \"486ea1cf-7023-407b-bb7f-c67ce34c42aa\" (UID: \"486ea1cf-7023-407b-bb7f-c67ce34c42aa\") " Oct 14 13:32:36 crc kubenswrapper[4725]: I1014 13:32:36.868917 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/486ea1cf-7023-407b-bb7f-c67ce34c42aa-logs" (OuterVolumeSpecName: "logs") pod "486ea1cf-7023-407b-bb7f-c67ce34c42aa" (UID: "486ea1cf-7023-407b-bb7f-c67ce34c42aa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:32:36 crc kubenswrapper[4725]: I1014 13:32:36.869337 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6316dc0c-436a-4cb3-86ae-0d073a62980e-config-data" (OuterVolumeSpecName: "config-data") pod "6316dc0c-436a-4cb3-86ae-0d073a62980e" (UID: "6316dc0c-436a-4cb3-86ae-0d073a62980e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:32:36 crc kubenswrapper[4725]: I1014 13:32:36.873295 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6316dc0c-436a-4cb3-86ae-0d073a62980e-scripts" (OuterVolumeSpecName: "scripts") pod "6316dc0c-436a-4cb3-86ae-0d073a62980e" (UID: "6316dc0c-436a-4cb3-86ae-0d073a62980e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:32:36 crc kubenswrapper[4725]: I1014 13:32:36.876028 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6316dc0c-436a-4cb3-86ae-0d073a62980e-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "6316dc0c-436a-4cb3-86ae-0d073a62980e" (UID: "6316dc0c-436a-4cb3-86ae-0d073a62980e"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:32:36 crc kubenswrapper[4725]: I1014 13:32:36.876614 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/486ea1cf-7023-407b-bb7f-c67ce34c42aa-kube-api-access-p2xjc" (OuterVolumeSpecName: "kube-api-access-p2xjc") pod "486ea1cf-7023-407b-bb7f-c67ce34c42aa" (UID: "486ea1cf-7023-407b-bb7f-c67ce34c42aa"). InnerVolumeSpecName "kube-api-access-p2xjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:32:36 crc kubenswrapper[4725]: I1014 13:32:36.969303 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/486ea1cf-7023-407b-bb7f-c67ce34c42aa-horizon-secret-key\") pod \"486ea1cf-7023-407b-bb7f-c67ce34c42aa\" (UID: \"486ea1cf-7023-407b-bb7f-c67ce34c42aa\") " Oct 14 13:32:36 crc kubenswrapper[4725]: I1014 13:32:36.969679 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6316dc0c-436a-4cb3-86ae-0d073a62980e-logs\") pod \"6316dc0c-436a-4cb3-86ae-0d073a62980e\" (UID: \"6316dc0c-436a-4cb3-86ae-0d073a62980e\") " Oct 14 13:32:36 crc kubenswrapper[4725]: I1014 13:32:36.969717 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brqf2\" (UniqueName: \"kubernetes.io/projected/6316dc0c-436a-4cb3-86ae-0d073a62980e-kube-api-access-brqf2\") pod \"6316dc0c-436a-4cb3-86ae-0d073a62980e\" (UID: \"6316dc0c-436a-4cb3-86ae-0d073a62980e\") " Oct 14 13:32:36 crc kubenswrapper[4725]: I1014 13:32:36.969740 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/486ea1cf-7023-407b-bb7f-c67ce34c42aa-scripts\") pod \"486ea1cf-7023-407b-bb7f-c67ce34c42aa\" (UID: \"486ea1cf-7023-407b-bb7f-c67ce34c42aa\") " Oct 14 13:32:36 crc kubenswrapper[4725]: I1014 13:32:36.969781 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/486ea1cf-7023-407b-bb7f-c67ce34c42aa-config-data\") pod \"486ea1cf-7023-407b-bb7f-c67ce34c42aa\" (UID: \"486ea1cf-7023-407b-bb7f-c67ce34c42aa\") " Oct 14 13:32:36 crc kubenswrapper[4725]: I1014 13:32:36.970127 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6316dc0c-436a-4cb3-86ae-0d073a62980e-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:36 crc kubenswrapper[4725]: I1014 13:32:36.970143 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6316dc0c-436a-4cb3-86ae-0d073a62980e-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:36 crc kubenswrapper[4725]: I1014 13:32:36.970156 4725 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/486ea1cf-7023-407b-bb7f-c67ce34c42aa-logs\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:36 crc kubenswrapper[4725]: I1014 13:32:36.970168 4725 
reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6316dc0c-436a-4cb3-86ae-0d073a62980e-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:36 crc kubenswrapper[4725]: I1014 13:32:36.970181 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2xjc\" (UniqueName: \"kubernetes.io/projected/486ea1cf-7023-407b-bb7f-c67ce34c42aa-kube-api-access-p2xjc\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:36 crc kubenswrapper[4725]: I1014 13:32:36.970752 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6316dc0c-436a-4cb3-86ae-0d073a62980e-logs" (OuterVolumeSpecName: "logs") pod "6316dc0c-436a-4cb3-86ae-0d073a62980e" (UID: "6316dc0c-436a-4cb3-86ae-0d073a62980e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:32:36 crc kubenswrapper[4725]: I1014 13:32:36.971246 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/486ea1cf-7023-407b-bb7f-c67ce34c42aa-scripts" (OuterVolumeSpecName: "scripts") pod "486ea1cf-7023-407b-bb7f-c67ce34c42aa" (UID: "486ea1cf-7023-407b-bb7f-c67ce34c42aa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:32:36 crc kubenswrapper[4725]: I1014 13:32:36.971505 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/486ea1cf-7023-407b-bb7f-c67ce34c42aa-config-data" (OuterVolumeSpecName: "config-data") pod "486ea1cf-7023-407b-bb7f-c67ce34c42aa" (UID: "486ea1cf-7023-407b-bb7f-c67ce34c42aa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:32:36 crc kubenswrapper[4725]: I1014 13:32:36.974704 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/486ea1cf-7023-407b-bb7f-c67ce34c42aa-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "486ea1cf-7023-407b-bb7f-c67ce34c42aa" (UID: "486ea1cf-7023-407b-bb7f-c67ce34c42aa"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:32:36 crc kubenswrapper[4725]: I1014 13:32:36.975781 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6316dc0c-436a-4cb3-86ae-0d073a62980e-kube-api-access-brqf2" (OuterVolumeSpecName: "kube-api-access-brqf2") pod "6316dc0c-436a-4cb3-86ae-0d073a62980e" (UID: "6316dc0c-436a-4cb3-86ae-0d073a62980e"). InnerVolumeSpecName "kube-api-access-brqf2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:32:37 crc kubenswrapper[4725]: I1014 13:32:37.070809 4725 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/486ea1cf-7023-407b-bb7f-c67ce34c42aa-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:37 crc kubenswrapper[4725]: I1014 13:32:37.070855 4725 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6316dc0c-436a-4cb3-86ae-0d073a62980e-logs\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:37 crc kubenswrapper[4725]: I1014 13:32:37.070865 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/486ea1cf-7023-407b-bb7f-c67ce34c42aa-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:37 crc kubenswrapper[4725]: I1014 13:32:37.070875 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brqf2\" (UniqueName: \"kubernetes.io/projected/6316dc0c-436a-4cb3-86ae-0d073a62980e-kube-api-access-brqf2\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:37 crc kubenswrapper[4725]: I1014 13:32:37.070885 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/486ea1cf-7023-407b-bb7f-c67ce34c42aa-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:37 crc kubenswrapper[4725]: I1014 13:32:37.193150 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-9d5c84b44-vssnm"] Oct 14 13:32:37 crc kubenswrapper[4725]: I1014 13:32:37.556490 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f49767877-s5fb5" event={"ID":"6316dc0c-436a-4cb3-86ae-0d073a62980e","Type":"ContainerDied","Data":"f353098c44c9ac2b7e2e8c0e4304c550fae7b297bc43ed599cf9a0b4d792aeb4"} Oct 14 13:32:37 crc kubenswrapper[4725]: I1014 13:32:37.556522 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f49767877-s5fb5" Oct 14 13:32:37 crc kubenswrapper[4725]: I1014 13:32:37.562190 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-684dcc5995-mbkf4" event={"ID":"486ea1cf-7023-407b-bb7f-c67ce34c42aa","Type":"ContainerDied","Data":"299a5a505d7775e2108d651a5a669524cb64aa3a32315966f119294e786cc33a"} Oct 14 13:32:37 crc kubenswrapper[4725]: I1014 13:32:37.562250 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-684dcc5995-mbkf4" Oct 14 13:32:37 crc kubenswrapper[4725]: I1014 13:32:37.628403 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f49767877-s5fb5"] Oct 14 13:32:37 crc kubenswrapper[4725]: I1014 13:32:37.639358 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6f49767877-s5fb5"] Oct 14 13:32:37 crc kubenswrapper[4725]: I1014 13:32:37.670098 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-684dcc5995-mbkf4"] Oct 14 13:32:37 crc kubenswrapper[4725]: I1014 13:32:37.677649 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-684dcc5995-mbkf4"] Oct 14 13:32:37 crc kubenswrapper[4725]: I1014 13:32:37.938895 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="486ea1cf-7023-407b-bb7f-c67ce34c42aa" path="/var/lib/kubelet/pods/486ea1cf-7023-407b-bb7f-c67ce34c42aa/volumes" Oct 14 13:32:37 crc kubenswrapper[4725]: I1014 13:32:37.939646 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6316dc0c-436a-4cb3-86ae-0d073a62980e" path="/var/lib/kubelet/pods/6316dc0c-436a-4cb3-86ae-0d073a62980e/volumes" Oct 14 13:32:38 crc kubenswrapper[4725]: E1014 13:32:38.231086 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Oct 14 13:32:38 crc kubenswrapper[4725]: E1014 13:32:38.231631 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lhptj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-gzbkx_openstack(ec415043-bd33-4ab3-8437-28eda0458656): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 13:32:38 crc kubenswrapper[4725]: E1014 13:32:38.232823 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-gzbkx" podUID="ec415043-bd33-4ab3-8437-28eda0458656" Oct 14 13:32:38 crc kubenswrapper[4725]: E1014 13:32:38.573838 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-gzbkx" podUID="ec415043-bd33-4ab3-8437-28eda0458656" Oct 14 13:32:39 crc kubenswrapper[4725]: E1014 13:32:39.398075 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Oct 14 13:32:39 crc kubenswrapper[4725]: E1014 13:32:39.399309 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xbkkb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-rwcgw_openstack(1f19eb4e-4359-4092-a050-e1d695fbb891): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 13:32:39 crc kubenswrapper[4725]: E1014 13:32:39.402844 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-rwcgw" podUID="1f19eb4e-4359-4092-a050-e1d695fbb891" Oct 14 13:32:39 crc kubenswrapper[4725]: I1014 13:32:39.478885 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-65fbf987c-h98xp" Oct 14 13:32:39 crc kubenswrapper[4725]: I1014 13:32:39.593328 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65fbf987c-h98xp" event={"ID":"f775e4a2-1844-4b66-ada3-15ce61e06ed7","Type":"ContainerDied","Data":"4ebef32560a377b172824a18f83cda034f0431e3eaf5803d6fbaa72d02ffbbb2"} Oct 14 13:32:39 crc kubenswrapper[4725]: E1014 13:32:39.594826 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-rwcgw" podUID="1f19eb4e-4359-4092-a050-e1d695fbb891" Oct 14 13:32:39 crc kubenswrapper[4725]: I1014 13:32:39.595022 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-65fbf987c-h98xp" Oct 14 13:32:39 crc kubenswrapper[4725]: I1014 13:32:39.610093 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f775e4a2-1844-4b66-ada3-15ce61e06ed7-config-data\") pod \"f775e4a2-1844-4b66-ada3-15ce61e06ed7\" (UID: \"f775e4a2-1844-4b66-ada3-15ce61e06ed7\") " Oct 14 13:32:39 crc kubenswrapper[4725]: I1014 13:32:39.610176 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wf6h\" (UniqueName: \"kubernetes.io/projected/f775e4a2-1844-4b66-ada3-15ce61e06ed7-kube-api-access-8wf6h\") pod \"f775e4a2-1844-4b66-ada3-15ce61e06ed7\" (UID: \"f775e4a2-1844-4b66-ada3-15ce61e06ed7\") " Oct 14 13:32:39 crc kubenswrapper[4725]: I1014 13:32:39.610260 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f775e4a2-1844-4b66-ada3-15ce61e06ed7-scripts\") pod \"f775e4a2-1844-4b66-ada3-15ce61e06ed7\" (UID: \"f775e4a2-1844-4b66-ada3-15ce61e06ed7\") " Oct 14 13:32:39 crc kubenswrapper[4725]: I1014 13:32:39.610288 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f775e4a2-1844-4b66-ada3-15ce61e06ed7-horizon-secret-key\") pod \"f775e4a2-1844-4b66-ada3-15ce61e06ed7\" (UID: \"f775e4a2-1844-4b66-ada3-15ce61e06ed7\") " Oct 14 13:32:39 crc kubenswrapper[4725]: I1014 13:32:39.610361 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f775e4a2-1844-4b66-ada3-15ce61e06ed7-logs\") pod \"f775e4a2-1844-4b66-ada3-15ce61e06ed7\" (UID: \"f775e4a2-1844-4b66-ada3-15ce61e06ed7\") " Oct 14 13:32:39 crc kubenswrapper[4725]: I1014 13:32:39.610908 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f775e4a2-1844-4b66-ada3-15ce61e06ed7-logs" (OuterVolumeSpecName: "logs") pod "f775e4a2-1844-4b66-ada3-15ce61e06ed7" (UID: "f775e4a2-1844-4b66-ada3-15ce61e06ed7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:32:39 crc kubenswrapper[4725]: I1014 13:32:39.611022 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f775e4a2-1844-4b66-ada3-15ce61e06ed7-scripts" (OuterVolumeSpecName: "scripts") pod "f775e4a2-1844-4b66-ada3-15ce61e06ed7" (UID: "f775e4a2-1844-4b66-ada3-15ce61e06ed7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:32:39 crc kubenswrapper[4725]: I1014 13:32:39.611244 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f775e4a2-1844-4b66-ada3-15ce61e06ed7-config-data" (OuterVolumeSpecName: "config-data") pod "f775e4a2-1844-4b66-ada3-15ce61e06ed7" (UID: "f775e4a2-1844-4b66-ada3-15ce61e06ed7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:32:39 crc kubenswrapper[4725]: I1014 13:32:39.611495 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f775e4a2-1844-4b66-ada3-15ce61e06ed7-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:39 crc kubenswrapper[4725]: I1014 13:32:39.611516 4725 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f775e4a2-1844-4b66-ada3-15ce61e06ed7-logs\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:39 crc kubenswrapper[4725]: I1014 13:32:39.611525 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f775e4a2-1844-4b66-ada3-15ce61e06ed7-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:39 crc kubenswrapper[4725]: I1014 13:32:39.614921 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f775e4a2-1844-4b66-ada3-15ce61e06ed7-kube-api-access-8wf6h" (OuterVolumeSpecName: "kube-api-access-8wf6h") pod "f775e4a2-1844-4b66-ada3-15ce61e06ed7" (UID: "f775e4a2-1844-4b66-ada3-15ce61e06ed7"). InnerVolumeSpecName "kube-api-access-8wf6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:32:39 crc kubenswrapper[4725]: I1014 13:32:39.615607 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f775e4a2-1844-4b66-ada3-15ce61e06ed7-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f775e4a2-1844-4b66-ada3-15ce61e06ed7" (UID: "f775e4a2-1844-4b66-ada3-15ce61e06ed7"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:32:39 crc kubenswrapper[4725]: I1014 13:32:39.712678 4725 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f775e4a2-1844-4b66-ada3-15ce61e06ed7-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:39 crc kubenswrapper[4725]: I1014 13:32:39.712706 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wf6h\" (UniqueName: \"kubernetes.io/projected/f775e4a2-1844-4b66-ada3-15ce61e06ed7-kube-api-access-8wf6h\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:39 crc kubenswrapper[4725]: E1014 13:32:39.785605 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Oct 14 13:32:39 crc kubenswrapper[4725]: E1014 13:32:39.785765 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n585h644hc8h54dhc6h655h5dbh544h5d7h544h58bh5c9h647hd8hc7h657h9chf9h6h5bh58bh58fh5f6h679hddh5d7hcchcch75h596h67fh569q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mq7z4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(4bd1dc15-7e73-480b-9599-123a18602d5e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 13:32:39 crc kubenswrapper[4725]: I1014 13:32:39.830495 4725 scope.go:117] "RemoveContainer" containerID="4f89fdc5517f31dcbd79fff79e47536fe39014c4c380886c42baf9a737b3885c" Oct 14 13:32:39 crc kubenswrapper[4725]: I1014 13:32:39.841338 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-5sg8k" Oct 14 13:32:39 crc kubenswrapper[4725]: I1014 13:32:39.918520 4725 scope.go:117] "RemoveContainer" containerID="14c109369e3be6f3172cf0a59123f0ddb647b1f34dec478eddbc0a0b1a8fb9ac" Oct 14 13:32:40 crc kubenswrapper[4725]: I1014 13:32:40.018071 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmhg6\" (UniqueName: \"kubernetes.io/projected/9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0-kube-api-access-hmhg6\") pod \"9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0\" (UID: \"9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0\") " Oct 14 13:32:40 crc kubenswrapper[4725]: I1014 13:32:40.018322 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0-combined-ca-bundle\") pod \"9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0\" (UID: \"9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0\") " Oct 14 13:32:40 crc kubenswrapper[4725]: I1014 13:32:40.018374 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0-config\") pod \"9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0\" (UID: \"9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0\") " Oct 14 13:32:40 crc kubenswrapper[4725]: I1014 13:32:40.038000 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0-kube-api-access-hmhg6" (OuterVolumeSpecName: "kube-api-access-hmhg6") pod "9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0" (UID: "9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0"). InnerVolumeSpecName "kube-api-access-hmhg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:32:40 crc kubenswrapper[4725]: I1014 13:32:40.058003 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-65fbf987c-h98xp"] Oct 14 13:32:40 crc kubenswrapper[4725]: I1014 13:32:40.059873 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0-config" (OuterVolumeSpecName: "config") pod "9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0" (UID: "9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:32:40 crc kubenswrapper[4725]: I1014 13:32:40.063259 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0" (UID: "9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:32:40 crc kubenswrapper[4725]: I1014 13:32:40.071336 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-65fbf987c-h98xp"] Oct 14 13:32:40 crc kubenswrapper[4725]: I1014 13:32:40.121217 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:40 crc kubenswrapper[4725]: I1014 13:32:40.121258 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:40 crc kubenswrapper[4725]: I1014 13:32:40.121269 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmhg6\" (UniqueName: \"kubernetes.io/projected/9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0-kube-api-access-hmhg6\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:40 crc kubenswrapper[4725]: I1014 13:32:40.298860 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7cdf854644-xbv6p"] Oct 14 13:32:40 crc kubenswrapper[4725]: W1014 13:32:40.320217 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f50192b_c5ae_418d_9d3a_a670d49f8ded.slice/crio-d08e42527354d20248fd329be4fe97bde0876b1f9006e253a54d395d4b03bee1 WatchSource:0}: Error finding container d08e42527354d20248fd329be4fe97bde0876b1f9006e253a54d395d4b03bee1: Status 404 returned error can't find the container with id d08e42527354d20248fd329be4fe97bde0876b1f9006e253a54d395d4b03bee1 Oct 14 13:32:40 crc kubenswrapper[4725]: I1014 13:32:40.374858 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-tlq7r"] Oct 14 13:32:40 crc kubenswrapper[4725]: W1014 13:32:40.388230 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22b0bf13_3251_446f_946b_273f89349427.slice/crio-63ea4889fdd1cf7ea19670415274686ecefdd907f156d76b7fdf6e60e92f63df WatchSource:0}: Error finding container 63ea4889fdd1cf7ea19670415274686ecefdd907f156d76b7fdf6e60e92f63df: Status 404 returned error can't find the container with id 63ea4889fdd1cf7ea19670415274686ecefdd907f156d76b7fdf6e60e92f63df Oct 14 13:32:40 crc kubenswrapper[4725]: I1014 13:32:40.449880 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 13:32:40 crc kubenswrapper[4725]: W1014 13:32:40.450997 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d89387e_949e_49c8_b6a1_543aaa1a02d5.slice/crio-2021b478db781a1431816348c5d66db3476897b62c4a90d018812cee0dbf9e72 WatchSource:0}: Error finding container 2021b478db781a1431816348c5d66db3476897b62c4a90d018812cee0dbf9e72: Status 404 returned error can't find the container with id 2021b478db781a1431816348c5d66db3476897b62c4a90d018812cee0dbf9e72 Oct 14 13:32:40 crc kubenswrapper[4725]: I1014 13:32:40.615183 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5d89387e-949e-49c8-b6a1-543aaa1a02d5","Type":"ContainerStarted","Data":"2021b478db781a1431816348c5d66db3476897b62c4a90d018812cee0dbf9e72"} Oct 14 13:32:40 crc kubenswrapper[4725]: I1014 13:32:40.619329 4725 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/neutron-db-sync-5sg8k" event={"ID":"9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0","Type":"ContainerDied","Data":"96bda005a59ddbfd6f2997b5616778b15e790a7515400e6fee9f6d25de5716fe"} Oct 14 13:32:40 crc kubenswrapper[4725]: I1014 13:32:40.619366 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96bda005a59ddbfd6f2997b5616778b15e790a7515400e6fee9f6d25de5716fe" Oct 14 13:32:40 crc kubenswrapper[4725]: I1014 13:32:40.619425 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-5sg8k" Oct 14 13:32:40 crc kubenswrapper[4725]: I1014 13:32:40.632233 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d5c84b44-vssnm" event={"ID":"551500de-77a9-4f28-ab4b-b8259e04804b","Type":"ContainerStarted","Data":"12820d3bbd64cf010de74b038f5a42c61026b19b4e28f3044936bcaf135f4e48"} Oct 14 13:32:40 crc kubenswrapper[4725]: I1014 13:32:40.632277 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d5c84b44-vssnm" event={"ID":"551500de-77a9-4f28-ab4b-b8259e04804b","Type":"ContainerStarted","Data":"6e0d6c8cd3bf4408ac4759862b6535d587156545328edd7d6b909875f00ec9e8"} Oct 14 13:32:40 crc kubenswrapper[4725]: I1014 13:32:40.639228 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3ffcd111-cd30-4d4e-933f-5c82abf7de93","Type":"ContainerStarted","Data":"c5f1540ecdbcaeee50095e49491ef4b4ffbfeedabc80bee9449fe88d1de954af"} Oct 14 13:32:40 crc kubenswrapper[4725]: I1014 13:32:40.639364 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3ffcd111-cd30-4d4e-933f-5c82abf7de93" containerName="glance-log" containerID="cri-o://6eb6ad83bc4cc4da3ee0f8560a09d6dbd5860139c4f02479cf960e0fe8528476" gracePeriod=30 Oct 14 13:32:40 crc kubenswrapper[4725]: I1014 13:32:40.639673 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3ffcd111-cd30-4d4e-933f-5c82abf7de93" containerName="glance-httpd" containerID="cri-o://c5f1540ecdbcaeee50095e49491ef4b4ffbfeedabc80bee9449fe88d1de954af" gracePeriod=30 Oct 14 13:32:40 crc kubenswrapper[4725]: I1014 13:32:40.643876 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tlq7r" event={"ID":"22b0bf13-3251-446f-946b-273f89349427","Type":"ContainerStarted","Data":"63ea4889fdd1cf7ea19670415274686ecefdd907f156d76b7fdf6e60e92f63df"} Oct 14 13:32:40 crc kubenswrapper[4725]: I1014 13:32:40.660588 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=26.660570483 podStartE2EDuration="26.660570483s" podCreationTimestamp="2025-10-14 13:32:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:32:40.657867469 +0000 UTC m=+1077.506302318" watchObservedRunningTime="2025-10-14 13:32:40.660570483 +0000 UTC m=+1077.509005292" Oct 14 13:32:40 crc kubenswrapper[4725]: I1014 13:32:40.661414 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-fgqxw" event={"ID":"7d968db2-d49e-4eee-9927-11fd32b9cd89","Type":"ContainerStarted","Data":"43c4927714fb8b7e1659f7d263e061d3f13e6778d651fd5983c683b963b584ae"} Oct 14 13:32:40 crc kubenswrapper[4725]: I1014 13:32:40.666165 4725 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/horizon-7cdf854644-xbv6p" event={"ID":"0f50192b-c5ae-418d-9d3a-a670d49f8ded","Type":"ContainerStarted","Data":"d08e42527354d20248fd329be4fe97bde0876b1f9006e253a54d395d4b03bee1"} Oct 14 13:32:40 crc kubenswrapper[4725]: I1014 13:32:40.681876 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-fgqxw" podStartSLOduration=3.30688106 podStartE2EDuration="32.681858792s" podCreationTimestamp="2025-10-14 13:32:08 +0000 UTC" firstStartedPulling="2025-10-14 13:32:10.413157703 +0000 UTC m=+1047.261592512" lastFinishedPulling="2025-10-14 13:32:39.788135435 +0000 UTC m=+1076.636570244" observedRunningTime="2025-10-14 13:32:40.676971789 +0000 UTC m=+1077.525406598" watchObservedRunningTime="2025-10-14 13:32:40.681858792 +0000 UTC m=+1077.530293601" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.007403 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-h66x8"] Oct 14 13:32:41 crc kubenswrapper[4725]: E1014 13:32:41.008127 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0" containerName="neutron-db-sync" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.008143 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0" containerName="neutron-db-sync" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.008566 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0" containerName="neutron-db-sync" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.009545 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-h66x8" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.026071 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-h66x8"] Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.138752 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/276146f9-c73f-4f38-b4b2-bba0a70221fe-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-h66x8\" (UID: \"276146f9-c73f-4f38-b4b2-bba0a70221fe\") " pod="openstack/dnsmasq-dns-84b966f6c9-h66x8" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.138855 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/276146f9-c73f-4f38-b4b2-bba0a70221fe-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-h66x8\" (UID: \"276146f9-c73f-4f38-b4b2-bba0a70221fe\") " pod="openstack/dnsmasq-dns-84b966f6c9-h66x8" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.138902 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/276146f9-c73f-4f38-b4b2-bba0a70221fe-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-h66x8\" (UID: \"276146f9-c73f-4f38-b4b2-bba0a70221fe\") " pod="openstack/dnsmasq-dns-84b966f6c9-h66x8" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.138998 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/276146f9-c73f-4f38-b4b2-bba0a70221fe-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-h66x8\" (UID: \"276146f9-c73f-4f38-b4b2-bba0a70221fe\") " 
pod="openstack/dnsmasq-dns-84b966f6c9-h66x8" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.139023 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69zss\" (UniqueName: \"kubernetes.io/projected/276146f9-c73f-4f38-b4b2-bba0a70221fe-kube-api-access-69zss\") pod \"dnsmasq-dns-84b966f6c9-h66x8\" (UID: \"276146f9-c73f-4f38-b4b2-bba0a70221fe\") " pod="openstack/dnsmasq-dns-84b966f6c9-h66x8" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.139065 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/276146f9-c73f-4f38-b4b2-bba0a70221fe-config\") pod \"dnsmasq-dns-84b966f6c9-h66x8\" (UID: \"276146f9-c73f-4f38-b4b2-bba0a70221fe\") " pod="openstack/dnsmasq-dns-84b966f6c9-h66x8" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.242189 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/276146f9-c73f-4f38-b4b2-bba0a70221fe-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-h66x8\" (UID: \"276146f9-c73f-4f38-b4b2-bba0a70221fe\") " pod="openstack/dnsmasq-dns-84b966f6c9-h66x8" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.242265 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69zss\" (UniqueName: \"kubernetes.io/projected/276146f9-c73f-4f38-b4b2-bba0a70221fe-kube-api-access-69zss\") pod \"dnsmasq-dns-84b966f6c9-h66x8\" (UID: \"276146f9-c73f-4f38-b4b2-bba0a70221fe\") " pod="openstack/dnsmasq-dns-84b966f6c9-h66x8" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.242328 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/276146f9-c73f-4f38-b4b2-bba0a70221fe-config\") pod \"dnsmasq-dns-84b966f6c9-h66x8\" (UID: \"276146f9-c73f-4f38-b4b2-bba0a70221fe\") " pod="openstack/dnsmasq-dns-84b966f6c9-h66x8" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.242396 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/276146f9-c73f-4f38-b4b2-bba0a70221fe-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-h66x8\" (UID: \"276146f9-c73f-4f38-b4b2-bba0a70221fe\") " pod="openstack/dnsmasq-dns-84b966f6c9-h66x8" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.242628 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/276146f9-c73f-4f38-b4b2-bba0a70221fe-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-h66x8\" (UID: \"276146f9-c73f-4f38-b4b2-bba0a70221fe\") " pod="openstack/dnsmasq-dns-84b966f6c9-h66x8" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.242911 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5cb9f8bc48-bkgjx"] Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.243555 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/276146f9-c73f-4f38-b4b2-bba0a70221fe-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-h66x8\" (UID: \"276146f9-c73f-4f38-b4b2-bba0a70221fe\") " pod="openstack/dnsmasq-dns-84b966f6c9-h66x8" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.244191 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5cb9f8bc48-bkgjx" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.246831 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/276146f9-c73f-4f38-b4b2-bba0a70221fe-config\") pod \"dnsmasq-dns-84b966f6c9-h66x8\" (UID: \"276146f9-c73f-4f38-b4b2-bba0a70221fe\") " pod="openstack/dnsmasq-dns-84b966f6c9-h66x8" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.247130 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/276146f9-c73f-4f38-b4b2-bba0a70221fe-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-h66x8\" (UID: \"276146f9-c73f-4f38-b4b2-bba0a70221fe\") " pod="openstack/dnsmasq-dns-84b966f6c9-h66x8" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.247338 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/276146f9-c73f-4f38-b4b2-bba0a70221fe-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-h66x8\" (UID: \"276146f9-c73f-4f38-b4b2-bba0a70221fe\") " pod="openstack/dnsmasq-dns-84b966f6c9-h66x8" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.247380 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.262250 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/276146f9-c73f-4f38-b4b2-bba0a70221fe-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-h66x8\" (UID: \"276146f9-c73f-4f38-b4b2-bba0a70221fe\") " pod="openstack/dnsmasq-dns-84b966f6c9-h66x8" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.262968 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-6mvtz" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.263282 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.265025 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/276146f9-c73f-4f38-b4b2-bba0a70221fe-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-h66x8\" (UID: \"276146f9-c73f-4f38-b4b2-bba0a70221fe\") " pod="openstack/dnsmasq-dns-84b966f6c9-h66x8" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.267406 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.277605 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69zss\" (UniqueName: \"kubernetes.io/projected/276146f9-c73f-4f38-b4b2-bba0a70221fe-kube-api-access-69zss\") pod \"dnsmasq-dns-84b966f6c9-h66x8\" (UID: \"276146f9-c73f-4f38-b4b2-bba0a70221fe\") " pod="openstack/dnsmasq-dns-84b966f6c9-h66x8" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.281859 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5cb9f8bc48-bkgjx"] Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.327807 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-h66x8" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.331822 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.345638 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caca4422-132e-491b-8070-f4e9c4c8ff3a-combined-ca-bundle\") pod \"neutron-5cb9f8bc48-bkgjx\" (UID: \"caca4422-132e-491b-8070-f4e9c4c8ff3a\") " pod="openstack/neutron-5cb9f8bc48-bkgjx" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.345727 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/caca4422-132e-491b-8070-f4e9c4c8ff3a-config\") pod \"neutron-5cb9f8bc48-bkgjx\" (UID: \"caca4422-132e-491b-8070-f4e9c4c8ff3a\") " pod="openstack/neutron-5cb9f8bc48-bkgjx" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.345756 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/caca4422-132e-491b-8070-f4e9c4c8ff3a-httpd-config\") pod \"neutron-5cb9f8bc48-bkgjx\" (UID: \"caca4422-132e-491b-8070-f4e9c4c8ff3a\") " pod="openstack/neutron-5cb9f8bc48-bkgjx" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.345791 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxdmt\" (UniqueName: \"kubernetes.io/projected/caca4422-132e-491b-8070-f4e9c4c8ff3a-kube-api-access-rxdmt\") pod \"neutron-5cb9f8bc48-bkgjx\" (UID: \"caca4422-132e-491b-8070-f4e9c4c8ff3a\") " pod="openstack/neutron-5cb9f8bc48-bkgjx" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.345810 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/caca4422-132e-491b-8070-f4e9c4c8ff3a-ovndb-tls-certs\") pod \"neutron-5cb9f8bc48-bkgjx\" (UID: \"caca4422-132e-491b-8070-f4e9c4c8ff3a\") " pod="openstack/neutron-5cb9f8bc48-bkgjx" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.447258 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"3ffcd111-cd30-4d4e-933f-5c82abf7de93\" (UID: \"3ffcd111-cd30-4d4e-933f-5c82abf7de93\") " Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.447569 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ffcd111-cd30-4d4e-933f-5c82abf7de93-config-data\") pod \"3ffcd111-cd30-4d4e-933f-5c82abf7de93\" (UID: \"3ffcd111-cd30-4d4e-933f-5c82abf7de93\") " Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.447695 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ffcd111-cd30-4d4e-933f-5c82abf7de93-scripts\") pod \"3ffcd111-cd30-4d4e-933f-5c82abf7de93\" (UID: \"3ffcd111-cd30-4d4e-933f-5c82abf7de93\") " Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.447721 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3ffcd111-cd30-4d4e-933f-5c82abf7de93-httpd-run\") pod \"3ffcd111-cd30-4d4e-933f-5c82abf7de93\" (UID: \"3ffcd111-cd30-4d4e-933f-5c82abf7de93\") " Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.447790 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ffcd111-cd30-4d4e-933f-5c82abf7de93-combined-ca-bundle\") pod \"3ffcd111-cd30-4d4e-933f-5c82abf7de93\" (UID: \"3ffcd111-cd30-4d4e-933f-5c82abf7de93\") " Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.447830 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ffcd111-cd30-4d4e-933f-5c82abf7de93-logs\") pod \"3ffcd111-cd30-4d4e-933f-5c82abf7de93\" (UID: \"3ffcd111-cd30-4d4e-933f-5c82abf7de93\") " Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.447860 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppxvr\" (UniqueName: \"kubernetes.io/projected/3ffcd111-cd30-4d4e-933f-5c82abf7de93-kube-api-access-ppxvr\") pod \"3ffcd111-cd30-4d4e-933f-5c82abf7de93\" (UID: \"3ffcd111-cd30-4d4e-933f-5c82abf7de93\") " Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.448149 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/caca4422-132e-491b-8070-f4e9c4c8ff3a-config\") pod \"neutron-5cb9f8bc48-bkgjx\" (UID: \"caca4422-132e-491b-8070-f4e9c4c8ff3a\") " pod="openstack/neutron-5cb9f8bc48-bkgjx" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.448203 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/caca4422-132e-491b-8070-f4e9c4c8ff3a-httpd-config\") pod \"neutron-5cb9f8bc48-bkgjx\" (UID: \"caca4422-132e-491b-8070-f4e9c4c8ff3a\") " pod="openstack/neutron-5cb9f8bc48-bkgjx" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.448256 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxdmt\" (UniqueName: \"kubernetes.io/projected/caca4422-132e-491b-8070-f4e9c4c8ff3a-kube-api-access-rxdmt\") pod \"neutron-5cb9f8bc48-bkgjx\" (UID: \"caca4422-132e-491b-8070-f4e9c4c8ff3a\") " pod="openstack/neutron-5cb9f8bc48-bkgjx" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.448285 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/caca4422-132e-491b-8070-f4e9c4c8ff3a-ovndb-tls-certs\") pod \"neutron-5cb9f8bc48-bkgjx\" (UID: \"caca4422-132e-491b-8070-f4e9c4c8ff3a\") " pod="openstack/neutron-5cb9f8bc48-bkgjx" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.448370 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caca4422-132e-491b-8070-f4e9c4c8ff3a-combined-ca-bundle\") pod \"neutron-5cb9f8bc48-bkgjx\" (UID: \"caca4422-132e-491b-8070-f4e9c4c8ff3a\") " pod="openstack/neutron-5cb9f8bc48-bkgjx" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.452778 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "3ffcd111-cd30-4d4e-933f-5c82abf7de93" (UID: "3ffcd111-cd30-4d4e-933f-5c82abf7de93"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.454934 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ffcd111-cd30-4d4e-933f-5c82abf7de93-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3ffcd111-cd30-4d4e-933f-5c82abf7de93" (UID: "3ffcd111-cd30-4d4e-933f-5c82abf7de93"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.455282 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ffcd111-cd30-4d4e-933f-5c82abf7de93-kube-api-access-ppxvr" (OuterVolumeSpecName: "kube-api-access-ppxvr") pod "3ffcd111-cd30-4d4e-933f-5c82abf7de93" (UID: "3ffcd111-cd30-4d4e-933f-5c82abf7de93"). InnerVolumeSpecName "kube-api-access-ppxvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.456246 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/caca4422-132e-491b-8070-f4e9c4c8ff3a-httpd-config\") pod \"neutron-5cb9f8bc48-bkgjx\" (UID: \"caca4422-132e-491b-8070-f4e9c4c8ff3a\") " pod="openstack/neutron-5cb9f8bc48-bkgjx" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.456390 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ffcd111-cd30-4d4e-933f-5c82abf7de93-logs" (OuterVolumeSpecName: "logs") pod "3ffcd111-cd30-4d4e-933f-5c82abf7de93" (UID: "3ffcd111-cd30-4d4e-933f-5c82abf7de93"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.457670 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caca4422-132e-491b-8070-f4e9c4c8ff3a-combined-ca-bundle\") pod \"neutron-5cb9f8bc48-bkgjx\" (UID: \"caca4422-132e-491b-8070-f4e9c4c8ff3a\") " pod="openstack/neutron-5cb9f8bc48-bkgjx" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.458925 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ffcd111-cd30-4d4e-933f-5c82abf7de93-scripts" (OuterVolumeSpecName: "scripts") pod "3ffcd111-cd30-4d4e-933f-5c82abf7de93" (UID: "3ffcd111-cd30-4d4e-933f-5c82abf7de93"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.467330 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/caca4422-132e-491b-8070-f4e9c4c8ff3a-config\") pod \"neutron-5cb9f8bc48-bkgjx\" (UID: \"caca4422-132e-491b-8070-f4e9c4c8ff3a\") " pod="openstack/neutron-5cb9f8bc48-bkgjx" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.476225 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxdmt\" (UniqueName: \"kubernetes.io/projected/caca4422-132e-491b-8070-f4e9c4c8ff3a-kube-api-access-rxdmt\") pod \"neutron-5cb9f8bc48-bkgjx\" (UID: \"caca4422-132e-491b-8070-f4e9c4c8ff3a\") " pod="openstack/neutron-5cb9f8bc48-bkgjx" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.484334 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/caca4422-132e-491b-8070-f4e9c4c8ff3a-ovndb-tls-certs\") pod \"neutron-5cb9f8bc48-bkgjx\" (UID: \"caca4422-132e-491b-8070-f4e9c4c8ff3a\") " pod="openstack/neutron-5cb9f8bc48-bkgjx" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.542032 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ffcd111-cd30-4d4e-933f-5c82abf7de93-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ffcd111-cd30-4d4e-933f-5c82abf7de93" (UID: "3ffcd111-cd30-4d4e-933f-5c82abf7de93"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.550835 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ffcd111-cd30-4d4e-933f-5c82abf7de93-config-data" (OuterVolumeSpecName: "config-data") pod "3ffcd111-cd30-4d4e-933f-5c82abf7de93" (UID: "3ffcd111-cd30-4d4e-933f-5c82abf7de93"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.553199 4725 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.553230 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ffcd111-cd30-4d4e-933f-5c82abf7de93-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.553241 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ffcd111-cd30-4d4e-933f-5c82abf7de93-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.553250 4725 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3ffcd111-cd30-4d4e-933f-5c82abf7de93-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.553258 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ffcd111-cd30-4d4e-933f-5c82abf7de93-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.553268 4725 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ffcd111-cd30-4d4e-933f-5c82abf7de93-logs\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.553275 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppxvr\" (UniqueName: \"kubernetes.io/projected/3ffcd111-cd30-4d4e-933f-5c82abf7de93-kube-api-access-ppxvr\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.585547 4725 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.589938 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5cb9f8bc48-bkgjx" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.654428 4725 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.699180 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tlq7r" event={"ID":"22b0bf13-3251-446f-946b-273f89349427","Type":"ContainerStarted","Data":"9de7d1d9280f5273ba36dc02ca1461284edcba1bfbe3ccada9299d9b936c41d4"} Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.709699 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cdf854644-xbv6p" event={"ID":"0f50192b-c5ae-418d-9d3a-a670d49f8ded","Type":"ContainerStarted","Data":"8bfb27e886e39894d3b543bf7c0d1c48a1dabfaf115d85b02d37e95ba51f3de0"} Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.709742 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cdf854644-xbv6p" event={"ID":"0f50192b-c5ae-418d-9d3a-a670d49f8ded","Type":"ContainerStarted","Data":"bab2a180b0d207aed698af0fe8fea444947e1d0173fb8eff998bf222a872f130"} Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.720134 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-tlq7r" podStartSLOduration=22.720110241 podStartE2EDuration="22.720110241s" podCreationTimestamp="2025-10-14 13:32:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:32:41.713521662 +0000 UTC m=+1078.561956481" watchObservedRunningTime="2025-10-14 13:32:41.720110241 +0000 UTC m=+1078.568545090" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.721010 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5d89387e-949e-49c8-b6a1-543aaa1a02d5","Type":"ContainerStarted","Data":"246420b5ad57920b0bb890f04077bcafa9a3e0e0138ffaedc093baaedfa039cf"} Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.722974 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d5c84b44-vssnm" event={"ID":"551500de-77a9-4f28-ab4b-b8259e04804b","Type":"ContainerStarted","Data":"2f06fee29194370b1ad8593cc12b848a47dce9aefe7a38963cecbdc075b21a71"} Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.743002 4725 generic.go:334] "Generic (PLEG): container finished" podID="3ffcd111-cd30-4d4e-933f-5c82abf7de93" containerID="c5f1540ecdbcaeee50095e49491ef4b4ffbfeedabc80bee9449fe88d1de954af" exitCode=0 Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.743043 4725 generic.go:334] "Generic (PLEG): container finished" podID="3ffcd111-cd30-4d4e-933f-5c82abf7de93" containerID="6eb6ad83bc4cc4da3ee0f8560a09d6dbd5860139c4f02479cf960e0fe8528476" exitCode=143 Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.743598 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.743645 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3ffcd111-cd30-4d4e-933f-5c82abf7de93","Type":"ContainerDied","Data":"c5f1540ecdbcaeee50095e49491ef4b4ffbfeedabc80bee9449fe88d1de954af"} Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.743716 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3ffcd111-cd30-4d4e-933f-5c82abf7de93","Type":"ContainerDied","Data":"6eb6ad83bc4cc4da3ee0f8560a09d6dbd5860139c4f02479cf960e0fe8528476"} Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.743729 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3ffcd111-cd30-4d4e-933f-5c82abf7de93","Type":"ContainerDied","Data":"b48f4cb54af856670f4177c8d8efc0debd3c6ff6265dd2b0b252ab37e31687d6"} Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.743764 4725 scope.go:117] "RemoveContainer" containerID="c5f1540ecdbcaeee50095e49491ef4b4ffbfeedabc80bee9449fe88d1de954af" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.770379 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7cdf854644-xbv6p" podStartSLOduration=22.770362367 podStartE2EDuration="22.770362367s" podCreationTimestamp="2025-10-14 13:32:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:32:41.741988886 +0000 UTC m=+1078.590423705" watchObservedRunningTime="2025-10-14 13:32:41.770362367 +0000 UTC m=+1078.618797176" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.771227 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-9d5c84b44-vssnm" podStartSLOduration=22.307666684 podStartE2EDuration="22.771222772s" podCreationTimestamp="2025-10-14 13:32:19 +0000 UTC" firstStartedPulling="2025-10-14 13:32:39.807018368 +0000 UTC m=+1076.655453187" lastFinishedPulling="2025-10-14 13:32:40.270574466 +0000 UTC m=+1077.119009275" observedRunningTime="2025-10-14 13:32:41.76676055 +0000 UTC m=+1078.615195359" watchObservedRunningTime="2025-10-14 13:32:41.771222772 +0000 UTC m=+1078.619657581" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.792136 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.814321 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.846679 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 13:32:41 crc kubenswrapper[4725]: E1014 13:32:41.847181 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ffcd111-cd30-4d4e-933f-5c82abf7de93" containerName="glance-log" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.847199 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ffcd111-cd30-4d4e-933f-5c82abf7de93" containerName="glance-log" Oct 14 13:32:41 crc kubenswrapper[4725]: E1014 13:32:41.847217 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ffcd111-cd30-4d4e-933f-5c82abf7de93" containerName="glance-httpd" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.847225 4725 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="3ffcd111-cd30-4d4e-933f-5c82abf7de93" containerName="glance-httpd" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.847472 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ffcd111-cd30-4d4e-933f-5c82abf7de93" containerName="glance-httpd" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.847504 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ffcd111-cd30-4d4e-933f-5c82abf7de93" containerName="glance-log" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.848821 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.854318 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.854550 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.854626 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.904499 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-h66x8"] Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.942980 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ffcd111-cd30-4d4e-933f-5c82abf7de93" path="/var/lib/kubelet/pods/3ffcd111-cd30-4d4e-933f-5c82abf7de93/volumes" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.944536 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f775e4a2-1844-4b66-ada3-15ce61e06ed7" path="/var/lib/kubelet/pods/f775e4a2-1844-4b66-ada3-15ce61e06ed7/volumes" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.962241 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"7756dc76-bc61-4bab-a662-649e33ffc929\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.962321 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7756dc76-bc61-4bab-a662-649e33ffc929-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7756dc76-bc61-4bab-a662-649e33ffc929\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.962341 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7756dc76-bc61-4bab-a662-649e33ffc929-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7756dc76-bc61-4bab-a662-649e33ffc929\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.962403 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7756dc76-bc61-4bab-a662-649e33ffc929-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7756dc76-bc61-4bab-a662-649e33ffc929\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.962444 4725 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7756dc76-bc61-4bab-a662-649e33ffc929-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7756dc76-bc61-4bab-a662-649e33ffc929\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.962486 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7756dc76-bc61-4bab-a662-649e33ffc929-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7756dc76-bc61-4bab-a662-649e33ffc929\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.962557 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prls4\" (UniqueName: \"kubernetes.io/projected/7756dc76-bc61-4bab-a662-649e33ffc929-kube-api-access-prls4\") pod \"glance-default-internal-api-0\" (UID: \"7756dc76-bc61-4bab-a662-649e33ffc929\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:41 crc kubenswrapper[4725]: I1014 13:32:41.962578 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7756dc76-bc61-4bab-a662-649e33ffc929-logs\") pod \"glance-default-internal-api-0\" (UID: \"7756dc76-bc61-4bab-a662-649e33ffc929\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:42 crc kubenswrapper[4725]: I1014 13:32:42.074282 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7756dc76-bc61-4bab-a662-649e33ffc929-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7756dc76-bc61-4bab-a662-649e33ffc929\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:42 crc kubenswrapper[4725]: I1014 13:32:42.074807 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7756dc76-bc61-4bab-a662-649e33ffc929-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7756dc76-bc61-4bab-a662-649e33ffc929\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:42 crc kubenswrapper[4725]: I1014 13:32:42.075025 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7756dc76-bc61-4bab-a662-649e33ffc929-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7756dc76-bc61-4bab-a662-649e33ffc929\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:42 crc kubenswrapper[4725]: I1014 13:32:42.075159 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7756dc76-bc61-4bab-a662-649e33ffc929-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7756dc76-bc61-4bab-a662-649e33ffc929\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:42 crc kubenswrapper[4725]: I1014 13:32:42.075196 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7756dc76-bc61-4bab-a662-649e33ffc929-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7756dc76-bc61-4bab-a662-649e33ffc929\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:42 crc kubenswrapper[4725]: I1014 13:32:42.075258 4725 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prls4\" (UniqueName: \"kubernetes.io/projected/7756dc76-bc61-4bab-a662-649e33ffc929-kube-api-access-prls4\") pod \"glance-default-internal-api-0\" (UID: \"7756dc76-bc61-4bab-a662-649e33ffc929\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:42 crc kubenswrapper[4725]: I1014 13:32:42.075300 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7756dc76-bc61-4bab-a662-649e33ffc929-logs\") pod \"glance-default-internal-api-0\" (UID: \"7756dc76-bc61-4bab-a662-649e33ffc929\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:42 crc kubenswrapper[4725]: I1014 13:32:42.075387 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"7756dc76-bc61-4bab-a662-649e33ffc929\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:42 crc kubenswrapper[4725]: I1014 13:32:42.075991 4725 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"7756dc76-bc61-4bab-a662-649e33ffc929\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Oct 14 13:32:42 crc kubenswrapper[4725]: I1014 13:32:42.081778 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7756dc76-bc61-4bab-a662-649e33ffc929-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7756dc76-bc61-4bab-a662-649e33ffc929\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:42 crc kubenswrapper[4725]: I1014 13:32:42.082562 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7756dc76-bc61-4bab-a662-649e33ffc929-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7756dc76-bc61-4bab-a662-649e33ffc929\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:42 crc kubenswrapper[4725]: I1014 13:32:42.085359 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7756dc76-bc61-4bab-a662-649e33ffc929-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7756dc76-bc61-4bab-a662-649e33ffc929\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:42 crc kubenswrapper[4725]: I1014 13:32:42.085834 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7756dc76-bc61-4bab-a662-649e33ffc929-logs\") pod \"glance-default-internal-api-0\" (UID: \"7756dc76-bc61-4bab-a662-649e33ffc929\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:42 crc kubenswrapper[4725]: I1014 13:32:42.086059 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7756dc76-bc61-4bab-a662-649e33ffc929-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7756dc76-bc61-4bab-a662-649e33ffc929\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:42 crc kubenswrapper[4725]: I1014 13:32:42.089268 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7756dc76-bc61-4bab-a662-649e33ffc929-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7756dc76-bc61-4bab-a662-649e33ffc929\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:42 crc kubenswrapper[4725]: I1014 13:32:42.107913 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prls4\" (UniqueName: \"kubernetes.io/projected/7756dc76-bc61-4bab-a662-649e33ffc929-kube-api-access-prls4\") pod \"glance-default-internal-api-0\" (UID: \"7756dc76-bc61-4bab-a662-649e33ffc929\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:42 crc kubenswrapper[4725]: I1014 13:32:42.134365 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"7756dc76-bc61-4bab-a662-649e33ffc929\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:32:42 crc kubenswrapper[4725]: I1014 13:32:42.182628 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 13:32:42 crc kubenswrapper[4725]: I1014 13:32:42.460037 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5cb9f8bc48-bkgjx"] Oct 14 13:32:42 crc kubenswrapper[4725]: W1014 13:32:42.662220 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcaca4422_132e_491b_8070_f4e9c4c8ff3a.slice/crio-a2d804e6be8a443f7b31e99a08d19393024a3aadccb49aba5c6d98df71b0f58b WatchSource:0}: Error finding container a2d804e6be8a443f7b31e99a08d19393024a3aadccb49aba5c6d98df71b0f58b: Status 404 returned error can't find the container with id a2d804e6be8a443f7b31e99a08d19393024a3aadccb49aba5c6d98df71b0f58b Oct 14 13:32:42 crc kubenswrapper[4725]: I1014 13:32:42.714731 4725 scope.go:117] "RemoveContainer" containerID="6eb6ad83bc4cc4da3ee0f8560a09d6dbd5860139c4f02479cf960e0fe8528476" Oct 14 13:32:42 crc kubenswrapper[4725]: I1014 13:32:42.764415 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cb9f8bc48-bkgjx" event={"ID":"caca4422-132e-491b-8070-f4e9c4c8ff3a","Type":"ContainerStarted","Data":"a2d804e6be8a443f7b31e99a08d19393024a3aadccb49aba5c6d98df71b0f58b"} Oct 14 13:32:42 crc kubenswrapper[4725]: I1014 13:32:42.771803 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5d89387e-949e-49c8-b6a1-543aaa1a02d5","Type":"ContainerStarted","Data":"90a394f171260f06a0c197fccb5ae0a94b79b2b8a61fa80072a9328aee753ccb"} Oct 14 13:32:42 crc kubenswrapper[4725]: I1014 13:32:42.783842 4725 generic.go:334] "Generic (PLEG): container finished" podID="7d968db2-d49e-4eee-9927-11fd32b9cd89" containerID="43c4927714fb8b7e1659f7d263e061d3f13e6778d651fd5983c683b963b584ae" exitCode=0 Oct 14 13:32:42 crc kubenswrapper[4725]: I1014 13:32:42.783953 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-fgqxw" event={"ID":"7d968db2-d49e-4eee-9927-11fd32b9cd89","Type":"ContainerDied","Data":"43c4927714fb8b7e1659f7d263e061d3f13e6778d651fd5983c683b963b584ae"} Oct 14 13:32:42 crc kubenswrapper[4725]: I1014 13:32:42.792229 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-h66x8" event={"ID":"276146f9-c73f-4f38-b4b2-bba0a70221fe","Type":"ContainerStarted","Data":"9a428f539459c969831612f64240fca06b9653881fb5ae5255e65f640b40eb4b"} Oct 14 
13:32:42 crc kubenswrapper[4725]: I1014 13:32:42.817569 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=20.817547049 podStartE2EDuration="20.817547049s" podCreationTimestamp="2025-10-14 13:32:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:32:42.797616947 +0000 UTC m=+1079.646051756" watchObservedRunningTime="2025-10-14 13:32:42.817547049 +0000 UTC m=+1079.665981868" Oct 14 13:32:42 crc kubenswrapper[4725]: I1014 13:32:42.862421 4725 scope.go:117] "RemoveContainer" containerID="c5f1540ecdbcaeee50095e49491ef4b4ffbfeedabc80bee9449fe88d1de954af" Oct 14 13:32:42 crc kubenswrapper[4725]: E1014 13:32:42.865388 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5f1540ecdbcaeee50095e49491ef4b4ffbfeedabc80bee9449fe88d1de954af\": container with ID starting with c5f1540ecdbcaeee50095e49491ef4b4ffbfeedabc80bee9449fe88d1de954af not found: ID does not exist" containerID="c5f1540ecdbcaeee50095e49491ef4b4ffbfeedabc80bee9449fe88d1de954af" Oct 14 13:32:42 crc kubenswrapper[4725]: I1014 13:32:42.865414 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5f1540ecdbcaeee50095e49491ef4b4ffbfeedabc80bee9449fe88d1de954af"} err="failed to get container status \"c5f1540ecdbcaeee50095e49491ef4b4ffbfeedabc80bee9449fe88d1de954af\": rpc error: code = NotFound desc = could not find container \"c5f1540ecdbcaeee50095e49491ef4b4ffbfeedabc80bee9449fe88d1de954af\": container with ID starting with c5f1540ecdbcaeee50095e49491ef4b4ffbfeedabc80bee9449fe88d1de954af not found: ID does not exist" Oct 14 13:32:42 crc kubenswrapper[4725]: I1014 13:32:42.865553 4725 scope.go:117] "RemoveContainer" containerID="6eb6ad83bc4cc4da3ee0f8560a09d6dbd5860139c4f02479cf960e0fe8528476" Oct 14 13:32:42 crc kubenswrapper[4725]: E1014 13:32:42.866400 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6eb6ad83bc4cc4da3ee0f8560a09d6dbd5860139c4f02479cf960e0fe8528476\": container with ID starting with 6eb6ad83bc4cc4da3ee0f8560a09d6dbd5860139c4f02479cf960e0fe8528476 not found: ID does not exist" containerID="6eb6ad83bc4cc4da3ee0f8560a09d6dbd5860139c4f02479cf960e0fe8528476" Oct 14 13:32:42 crc kubenswrapper[4725]: I1014 13:32:42.866475 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6eb6ad83bc4cc4da3ee0f8560a09d6dbd5860139c4f02479cf960e0fe8528476"} err="failed to get container status \"6eb6ad83bc4cc4da3ee0f8560a09d6dbd5860139c4f02479cf960e0fe8528476\": rpc error: code = NotFound desc = could not find container \"6eb6ad83bc4cc4da3ee0f8560a09d6dbd5860139c4f02479cf960e0fe8528476\": container with ID starting with 6eb6ad83bc4cc4da3ee0f8560a09d6dbd5860139c4f02479cf960e0fe8528476 not found: ID does not exist" Oct 14 13:32:42 crc kubenswrapper[4725]: I1014 13:32:42.866508 4725 scope.go:117] "RemoveContainer" containerID="c5f1540ecdbcaeee50095e49491ef4b4ffbfeedabc80bee9449fe88d1de954af" Oct 14 13:32:42 crc kubenswrapper[4725]: I1014 13:32:42.866778 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5f1540ecdbcaeee50095e49491ef4b4ffbfeedabc80bee9449fe88d1de954af"} err="failed to get container status 
\"c5f1540ecdbcaeee50095e49491ef4b4ffbfeedabc80bee9449fe88d1de954af\": rpc error: code = NotFound desc = could not find container \"c5f1540ecdbcaeee50095e49491ef4b4ffbfeedabc80bee9449fe88d1de954af\": container with ID starting with c5f1540ecdbcaeee50095e49491ef4b4ffbfeedabc80bee9449fe88d1de954af not found: ID does not exist" Oct 14 13:32:42 crc kubenswrapper[4725]: I1014 13:32:42.866799 4725 scope.go:117] "RemoveContainer" containerID="6eb6ad83bc4cc4da3ee0f8560a09d6dbd5860139c4f02479cf960e0fe8528476" Oct 14 13:32:42 crc kubenswrapper[4725]: I1014 13:32:42.867067 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6eb6ad83bc4cc4da3ee0f8560a09d6dbd5860139c4f02479cf960e0fe8528476"} err="failed to get container status \"6eb6ad83bc4cc4da3ee0f8560a09d6dbd5860139c4f02479cf960e0fe8528476\": rpc error: code = NotFound desc = could not find container \"6eb6ad83bc4cc4da3ee0f8560a09d6dbd5860139c4f02479cf960e0fe8528476\": container with ID starting with 6eb6ad83bc4cc4da3ee0f8560a09d6dbd5860139c4f02479cf960e0fe8528476 not found: ID does not exist" Oct 14 13:32:43 crc kubenswrapper[4725]: I1014 13:32:43.125266 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 14 13:32:43 crc kubenswrapper[4725]: I1014 13:32:43.125656 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 14 13:32:43 crc kubenswrapper[4725]: I1014 13:32:43.182154 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 14 13:32:43 crc kubenswrapper[4725]: I1014 13:32:43.186941 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 14 13:32:43 crc kubenswrapper[4725]: I1014 13:32:43.325885 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 13:32:43 crc kubenswrapper[4725]: I1014 13:32:43.442000 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-846bc7f557-srw79"] Oct 14 13:32:43 crc kubenswrapper[4725]: I1014 13:32:43.443673 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-846bc7f557-srw79" Oct 14 13:32:43 crc kubenswrapper[4725]: I1014 13:32:43.445556 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Oct 14 13:32:43 crc kubenswrapper[4725]: I1014 13:32:43.445794 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Oct 14 13:32:43 crc kubenswrapper[4725]: I1014 13:32:43.476148 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-846bc7f557-srw79"] Oct 14 13:32:43 crc kubenswrapper[4725]: I1014 13:32:43.524222 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df9cl\" (UniqueName: \"kubernetes.io/projected/8a4fb75f-e202-4df3-bb0c-bc889dd701d7-kube-api-access-df9cl\") pod \"neutron-846bc7f557-srw79\" (UID: \"8a4fb75f-e202-4df3-bb0c-bc889dd701d7\") " pod="openstack/neutron-846bc7f557-srw79" Oct 14 13:32:43 crc kubenswrapper[4725]: I1014 13:32:43.524281 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a4fb75f-e202-4df3-bb0c-bc889dd701d7-ovndb-tls-certs\") pod \"neutron-846bc7f557-srw79\" (UID: \"8a4fb75f-e202-4df3-bb0c-bc889dd701d7\") " pod="openstack/neutron-846bc7f557-srw79" Oct 14 13:32:43 crc kubenswrapper[4725]: I1014 13:32:43.524487 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8a4fb75f-e202-4df3-bb0c-bc889dd701d7-config\") pod \"neutron-846bc7f557-srw79\" (UID: \"8a4fb75f-e202-4df3-bb0c-bc889dd701d7\") " pod="openstack/neutron-846bc7f557-srw79" Oct 14 13:32:43 crc kubenswrapper[4725]: I1014 13:32:43.524565 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a4fb75f-e202-4df3-bb0c-bc889dd701d7-public-tls-certs\") pod \"neutron-846bc7f557-srw79\" (UID: \"8a4fb75f-e202-4df3-bb0c-bc889dd701d7\") " pod="openstack/neutron-846bc7f557-srw79" Oct 14 13:32:43 crc kubenswrapper[4725]: I1014 13:32:43.524767 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a4fb75f-e202-4df3-bb0c-bc889dd701d7-combined-ca-bundle\") pod \"neutron-846bc7f557-srw79\" (UID: \"8a4fb75f-e202-4df3-bb0c-bc889dd701d7\") " pod="openstack/neutron-846bc7f557-srw79" Oct 14 13:32:43 crc kubenswrapper[4725]: I1014 13:32:43.524870 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a4fb75f-e202-4df3-bb0c-bc889dd701d7-internal-tls-certs\") pod \"neutron-846bc7f557-srw79\" (UID: \"8a4fb75f-e202-4df3-bb0c-bc889dd701d7\") " pod="openstack/neutron-846bc7f557-srw79" Oct 14 13:32:43 crc kubenswrapper[4725]: I1014 13:32:43.524973 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8a4fb75f-e202-4df3-bb0c-bc889dd701d7-httpd-config\") pod \"neutron-846bc7f557-srw79\" (UID: \"8a4fb75f-e202-4df3-bb0c-bc889dd701d7\") " pod="openstack/neutron-846bc7f557-srw79" Oct 14 13:32:43 crc kubenswrapper[4725]: I1014 13:32:43.627017 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/8a4fb75f-e202-4df3-bb0c-bc889dd701d7-config\") pod \"neutron-846bc7f557-srw79\" (UID: \"8a4fb75f-e202-4df3-bb0c-bc889dd701d7\") " pod="openstack/neutron-846bc7f557-srw79" Oct 14 13:32:43 crc kubenswrapper[4725]: I1014 13:32:43.627074 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a4fb75f-e202-4df3-bb0c-bc889dd701d7-public-tls-certs\") pod \"neutron-846bc7f557-srw79\" (UID: \"8a4fb75f-e202-4df3-bb0c-bc889dd701d7\") " pod="openstack/neutron-846bc7f557-srw79" Oct 14 13:32:43 crc kubenswrapper[4725]: I1014 13:32:43.627136 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a4fb75f-e202-4df3-bb0c-bc889dd701d7-combined-ca-bundle\") pod \"neutron-846bc7f557-srw79\" (UID: \"8a4fb75f-e202-4df3-bb0c-bc889dd701d7\") " pod="openstack/neutron-846bc7f557-srw79" Oct 14 13:32:43 crc kubenswrapper[4725]: I1014 13:32:43.627167 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a4fb75f-e202-4df3-bb0c-bc889dd701d7-internal-tls-certs\") pod \"neutron-846bc7f557-srw79\" (UID: \"8a4fb75f-e202-4df3-bb0c-bc889dd701d7\") " pod="openstack/neutron-846bc7f557-srw79" Oct 14 13:32:43 crc kubenswrapper[4725]: I1014 13:32:43.627208 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8a4fb75f-e202-4df3-bb0c-bc889dd701d7-httpd-config\") pod \"neutron-846bc7f557-srw79\" (UID: \"8a4fb75f-e202-4df3-bb0c-bc889dd701d7\") " pod="openstack/neutron-846bc7f557-srw79" Oct 14 13:32:43 crc kubenswrapper[4725]: I1014 13:32:43.627246 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df9cl\" (UniqueName: \"kubernetes.io/projected/8a4fb75f-e202-4df3-bb0c-bc889dd701d7-kube-api-access-df9cl\") pod \"neutron-846bc7f557-srw79\" (UID: \"8a4fb75f-e202-4df3-bb0c-bc889dd701d7\") " pod="openstack/neutron-846bc7f557-srw79" Oct 14 13:32:43 crc kubenswrapper[4725]: I1014 13:32:43.627261 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a4fb75f-e202-4df3-bb0c-bc889dd701d7-ovndb-tls-certs\") pod \"neutron-846bc7f557-srw79\" (UID: \"8a4fb75f-e202-4df3-bb0c-bc889dd701d7\") " pod="openstack/neutron-846bc7f557-srw79" Oct 14 13:32:43 crc kubenswrapper[4725]: I1014 13:32:43.641256 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a4fb75f-e202-4df3-bb0c-bc889dd701d7-public-tls-certs\") pod \"neutron-846bc7f557-srw79\" (UID: \"8a4fb75f-e202-4df3-bb0c-bc889dd701d7\") " pod="openstack/neutron-846bc7f557-srw79" Oct 14 13:32:43 crc kubenswrapper[4725]: I1014 13:32:43.641369 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a4fb75f-e202-4df3-bb0c-bc889dd701d7-combined-ca-bundle\") pod \"neutron-846bc7f557-srw79\" (UID: \"8a4fb75f-e202-4df3-bb0c-bc889dd701d7\") " pod="openstack/neutron-846bc7f557-srw79" Oct 14 13:32:43 crc kubenswrapper[4725]: I1014 13:32:43.641385 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a4fb75f-e202-4df3-bb0c-bc889dd701d7-internal-tls-certs\") pod \"neutron-846bc7f557-srw79\" (UID: 
\"8a4fb75f-e202-4df3-bb0c-bc889dd701d7\") " pod="openstack/neutron-846bc7f557-srw79" Oct 14 13:32:43 crc kubenswrapper[4725]: I1014 13:32:43.641514 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8a4fb75f-e202-4df3-bb0c-bc889dd701d7-config\") pod \"neutron-846bc7f557-srw79\" (UID: \"8a4fb75f-e202-4df3-bb0c-bc889dd701d7\") " pod="openstack/neutron-846bc7f557-srw79" Oct 14 13:32:43 crc kubenswrapper[4725]: I1014 13:32:43.642221 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a4fb75f-e202-4df3-bb0c-bc889dd701d7-ovndb-tls-certs\") pod \"neutron-846bc7f557-srw79\" (UID: \"8a4fb75f-e202-4df3-bb0c-bc889dd701d7\") " pod="openstack/neutron-846bc7f557-srw79" Oct 14 13:32:43 crc kubenswrapper[4725]: I1014 13:32:43.650663 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8a4fb75f-e202-4df3-bb0c-bc889dd701d7-httpd-config\") pod \"neutron-846bc7f557-srw79\" (UID: \"8a4fb75f-e202-4df3-bb0c-bc889dd701d7\") " pod="openstack/neutron-846bc7f557-srw79" Oct 14 13:32:43 crc kubenswrapper[4725]: I1014 13:32:43.654795 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df9cl\" (UniqueName: \"kubernetes.io/projected/8a4fb75f-e202-4df3-bb0c-bc889dd701d7-kube-api-access-df9cl\") pod \"neutron-846bc7f557-srw79\" (UID: \"8a4fb75f-e202-4df3-bb0c-bc889dd701d7\") " pod="openstack/neutron-846bc7f557-srw79" Oct 14 13:32:43 crc kubenswrapper[4725]: I1014 13:32:43.791055 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-846bc7f557-srw79" Oct 14 13:32:43 crc kubenswrapper[4725]: I1014 13:32:43.803019 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cb9f8bc48-bkgjx" event={"ID":"caca4422-132e-491b-8070-f4e9c4c8ff3a","Type":"ContainerStarted","Data":"7f97d1174c63c7d8688086c1940ffcbb232b5708659c5ff5d87effa0e47016b1"} Oct 14 13:32:43 crc kubenswrapper[4725]: I1014 13:32:43.803079 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cb9f8bc48-bkgjx" event={"ID":"caca4422-132e-491b-8070-f4e9c4c8ff3a","Type":"ContainerStarted","Data":"0b237005908550424cbfafee9219b4674ac4fe85010dc7948b6fa9cd6ebb7209"} Oct 14 13:32:43 crc kubenswrapper[4725]: I1014 13:32:43.803125 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5cb9f8bc48-bkgjx" Oct 14 13:32:43 crc kubenswrapper[4725]: I1014 13:32:43.805835 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7756dc76-bc61-4bab-a662-649e33ffc929","Type":"ContainerStarted","Data":"cb1568b9a2a9c61856f210121bd2deed8ad2933246e855010e3f5a979991f543"} Oct 14 13:32:43 crc kubenswrapper[4725]: I1014 13:32:43.810840 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bd1dc15-7e73-480b-9599-123a18602d5e","Type":"ContainerStarted","Data":"15f5abc47c564299170d4a6c8faefc9f3b7a4850aee85003eaf162275aceeec0"} Oct 14 13:32:43 crc kubenswrapper[4725]: I1014 13:32:43.837965 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5cb9f8bc48-bkgjx" podStartSLOduration=2.837918002 podStartE2EDuration="2.837918002s" podCreationTimestamp="2025-10-14 13:32:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-10-14 13:32:43.824316472 +0000 UTC m=+1080.672751301" watchObservedRunningTime="2025-10-14 13:32:43.837918002 +0000 UTC m=+1080.686352821" Oct 14 13:32:43 crc kubenswrapper[4725]: I1014 13:32:43.846427 4725 generic.go:334] "Generic (PLEG): container finished" podID="276146f9-c73f-4f38-b4b2-bba0a70221fe" containerID="f735e4309f7a3c3f9c12ec2ca58a134eb2651ae0ed8c85f580f485f24c4005d3" exitCode=0 Oct 14 13:32:43 crc kubenswrapper[4725]: I1014 13:32:43.847622 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-h66x8" event={"ID":"276146f9-c73f-4f38-b4b2-bba0a70221fe","Type":"ContainerDied","Data":"f735e4309f7a3c3f9c12ec2ca58a134eb2651ae0ed8c85f580f485f24c4005d3"} Oct 14 13:32:43 crc kubenswrapper[4725]: I1014 13:32:43.848338 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 14 13:32:43 crc kubenswrapper[4725]: I1014 13:32:43.848699 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 14 13:32:44 crc kubenswrapper[4725]: I1014 13:32:44.284193 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-fgqxw" Oct 14 13:32:44 crc kubenswrapper[4725]: I1014 13:32:44.346648 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d968db2-d49e-4eee-9927-11fd32b9cd89-combined-ca-bundle\") pod \"7d968db2-d49e-4eee-9927-11fd32b9cd89\" (UID: \"7d968db2-d49e-4eee-9927-11fd32b9cd89\") " Oct 14 13:32:44 crc kubenswrapper[4725]: I1014 13:32:44.346731 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d968db2-d49e-4eee-9927-11fd32b9cd89-scripts\") pod \"7d968db2-d49e-4eee-9927-11fd32b9cd89\" (UID: \"7d968db2-d49e-4eee-9927-11fd32b9cd89\") " Oct 14 13:32:44 crc kubenswrapper[4725]: I1014 13:32:44.346815 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d968db2-d49e-4eee-9927-11fd32b9cd89-logs\") pod \"7d968db2-d49e-4eee-9927-11fd32b9cd89\" (UID: \"7d968db2-d49e-4eee-9927-11fd32b9cd89\") " Oct 14 13:32:44 crc kubenswrapper[4725]: I1014 13:32:44.346934 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85dsd\" (UniqueName: \"kubernetes.io/projected/7d968db2-d49e-4eee-9927-11fd32b9cd89-kube-api-access-85dsd\") pod \"7d968db2-d49e-4eee-9927-11fd32b9cd89\" (UID: \"7d968db2-d49e-4eee-9927-11fd32b9cd89\") " Oct 14 13:32:44 crc kubenswrapper[4725]: I1014 13:32:44.346956 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d968db2-d49e-4eee-9927-11fd32b9cd89-config-data\") pod \"7d968db2-d49e-4eee-9927-11fd32b9cd89\" (UID: \"7d968db2-d49e-4eee-9927-11fd32b9cd89\") " Oct 14 13:32:44 crc kubenswrapper[4725]: I1014 13:32:44.350826 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d968db2-d49e-4eee-9927-11fd32b9cd89-logs" (OuterVolumeSpecName: "logs") pod "7d968db2-d49e-4eee-9927-11fd32b9cd89" (UID: "7d968db2-d49e-4eee-9927-11fd32b9cd89"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:32:44 crc kubenswrapper[4725]: I1014 13:32:44.356873 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d968db2-d49e-4eee-9927-11fd32b9cd89-kube-api-access-85dsd" (OuterVolumeSpecName: "kube-api-access-85dsd") pod "7d968db2-d49e-4eee-9927-11fd32b9cd89" (UID: "7d968db2-d49e-4eee-9927-11fd32b9cd89"). InnerVolumeSpecName "kube-api-access-85dsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:32:44 crc kubenswrapper[4725]: I1014 13:32:44.359151 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d968db2-d49e-4eee-9927-11fd32b9cd89-scripts" (OuterVolumeSpecName: "scripts") pod "7d968db2-d49e-4eee-9927-11fd32b9cd89" (UID: "7d968db2-d49e-4eee-9927-11fd32b9cd89"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:32:44 crc kubenswrapper[4725]: I1014 13:32:44.383951 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d968db2-d49e-4eee-9927-11fd32b9cd89-config-data" (OuterVolumeSpecName: "config-data") pod "7d968db2-d49e-4eee-9927-11fd32b9cd89" (UID: "7d968db2-d49e-4eee-9927-11fd32b9cd89"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:32:44 crc kubenswrapper[4725]: I1014 13:32:44.390689 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d968db2-d49e-4eee-9927-11fd32b9cd89-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d968db2-d49e-4eee-9927-11fd32b9cd89" (UID: "7d968db2-d49e-4eee-9927-11fd32b9cd89"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:32:44 crc kubenswrapper[4725]: I1014 13:32:44.455516 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d968db2-d49e-4eee-9927-11fd32b9cd89-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:44 crc kubenswrapper[4725]: I1014 13:32:44.455734 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d968db2-d49e-4eee-9927-11fd32b9cd89-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:44 crc kubenswrapper[4725]: I1014 13:32:44.455744 4725 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d968db2-d49e-4eee-9927-11fd32b9cd89-logs\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:44 crc kubenswrapper[4725]: I1014 13:32:44.455753 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85dsd\" (UniqueName: \"kubernetes.io/projected/7d968db2-d49e-4eee-9927-11fd32b9cd89-kube-api-access-85dsd\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:44 crc kubenswrapper[4725]: I1014 13:32:44.455763 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d968db2-d49e-4eee-9927-11fd32b9cd89-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:44 crc kubenswrapper[4725]: I1014 13:32:44.516211 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-846bc7f557-srw79"] Oct 14 13:32:44 crc kubenswrapper[4725]: I1014 13:32:44.905263 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-85547dff5b-c66wz"] Oct 14 13:32:44 crc kubenswrapper[4725]: E1014 13:32:44.905726 4725 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="7d968db2-d49e-4eee-9927-11fd32b9cd89" containerName="placement-db-sync" Oct 14 13:32:44 crc kubenswrapper[4725]: I1014 13:32:44.905741 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d968db2-d49e-4eee-9927-11fd32b9cd89" containerName="placement-db-sync" Oct 14 13:32:44 crc kubenswrapper[4725]: I1014 13:32:44.905894 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d968db2-d49e-4eee-9927-11fd32b9cd89" containerName="placement-db-sync" Oct 14 13:32:44 crc kubenswrapper[4725]: I1014 13:32:44.908112 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85547dff5b-c66wz" Oct 14 13:32:44 crc kubenswrapper[4725]: I1014 13:32:44.913172 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Oct 14 13:32:44 crc kubenswrapper[4725]: I1014 13:32:44.913422 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Oct 14 13:32:44 crc kubenswrapper[4725]: I1014 13:32:44.926623 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85547dff5b-c66wz"] Oct 14 13:32:44 crc kubenswrapper[4725]: I1014 13:32:44.932103 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7756dc76-bc61-4bab-a662-649e33ffc929","Type":"ContainerStarted","Data":"692eff8fb6d388e0baa3e13d498259c71a7e5574109db0a7eb814434936bbaa6"} Oct 14 13:32:44 crc kubenswrapper[4725]: I1014 13:32:44.940307 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-fgqxw" event={"ID":"7d968db2-d49e-4eee-9927-11fd32b9cd89","Type":"ContainerDied","Data":"68e80da7fb35c05623b669b14e218ce6bab9d7322d419cb015c7c7e3004db542"} Oct 14 13:32:44 crc kubenswrapper[4725]: I1014 13:32:44.940784 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68e80da7fb35c05623b669b14e218ce6bab9d7322d419cb015c7c7e3004db542" Oct 14 13:32:44 crc kubenswrapper[4725]: I1014 13:32:44.940943 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-fgqxw" Oct 14 13:32:44 crc kubenswrapper[4725]: I1014 13:32:44.958878 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-846bc7f557-srw79" event={"ID":"8a4fb75f-e202-4df3-bb0c-bc889dd701d7","Type":"ContainerStarted","Data":"9107e23806e0efef02c4e74bbd1e7cd089ac8331b468156be5b24988185b630f"} Oct 14 13:32:44 crc kubenswrapper[4725]: I1014 13:32:44.965929 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0163b2fd-7423-4a50-90a8-e312d0b4db22-config-data\") pod \"placement-85547dff5b-c66wz\" (UID: \"0163b2fd-7423-4a50-90a8-e312d0b4db22\") " pod="openstack/placement-85547dff5b-c66wz" Oct 14 13:32:44 crc kubenswrapper[4725]: I1014 13:32:44.966006 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0163b2fd-7423-4a50-90a8-e312d0b4db22-logs\") pod \"placement-85547dff5b-c66wz\" (UID: \"0163b2fd-7423-4a50-90a8-e312d0b4db22\") " pod="openstack/placement-85547dff5b-c66wz" Oct 14 13:32:44 crc kubenswrapper[4725]: I1014 13:32:44.966087 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0163b2fd-7423-4a50-90a8-e312d0b4db22-public-tls-certs\") pod \"placement-85547dff5b-c66wz\" (UID: \"0163b2fd-7423-4a50-90a8-e312d0b4db22\") " pod="openstack/placement-85547dff5b-c66wz" Oct 14 13:32:44 crc kubenswrapper[4725]: I1014 13:32:44.966078 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-h66x8" event={"ID":"276146f9-c73f-4f38-b4b2-bba0a70221fe","Type":"ContainerStarted","Data":"f12bcf64a929fba6d482babdf453939e192c56ab2f17357aceac35899c9a3ab5"} Oct 14 13:32:44 crc kubenswrapper[4725]: I1014 13:32:44.966142 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0163b2fd-7423-4a50-90a8-e312d0b4db22-scripts\") pod \"placement-85547dff5b-c66wz\" (UID: \"0163b2fd-7423-4a50-90a8-e312d0b4db22\") " pod="openstack/placement-85547dff5b-c66wz" Oct 14 13:32:44 crc kubenswrapper[4725]: I1014 13:32:44.966179 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-84b966f6c9-h66x8" Oct 14 13:32:44 crc kubenswrapper[4725]: I1014 13:32:44.966195 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0163b2fd-7423-4a50-90a8-e312d0b4db22-internal-tls-certs\") pod \"placement-85547dff5b-c66wz\" (UID: \"0163b2fd-7423-4a50-90a8-e312d0b4db22\") " pod="openstack/placement-85547dff5b-c66wz" Oct 14 13:32:44 crc kubenswrapper[4725]: I1014 13:32:44.966256 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0163b2fd-7423-4a50-90a8-e312d0b4db22-combined-ca-bundle\") pod \"placement-85547dff5b-c66wz\" (UID: \"0163b2fd-7423-4a50-90a8-e312d0b4db22\") " pod="openstack/placement-85547dff5b-c66wz" Oct 14 13:32:44 crc kubenswrapper[4725]: I1014 13:32:44.966305 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6rxm\" (UniqueName: \"kubernetes.io/projected/0163b2fd-7423-4a50-90a8-e312d0b4db22-kube-api-access-h6rxm\") pod 
\"placement-85547dff5b-c66wz\" (UID: \"0163b2fd-7423-4a50-90a8-e312d0b4db22\") " pod="openstack/placement-85547dff5b-c66wz" Oct 14 13:32:44 crc kubenswrapper[4725]: I1014 13:32:44.970485 4725 generic.go:334] "Generic (PLEG): container finished" podID="22b0bf13-3251-446f-946b-273f89349427" containerID="9de7d1d9280f5273ba36dc02ca1461284edcba1bfbe3ccada9299d9b936c41d4" exitCode=0 Oct 14 13:32:44 crc kubenswrapper[4725]: I1014 13:32:44.971289 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tlq7r" event={"ID":"22b0bf13-3251-446f-946b-273f89349427","Type":"ContainerDied","Data":"9de7d1d9280f5273ba36dc02ca1461284edcba1bfbe3ccada9299d9b936c41d4"} Oct 14 13:32:45 crc kubenswrapper[4725]: I1014 13:32:45.013010 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-84b966f6c9-h66x8" podStartSLOduration=5.012979012 podStartE2EDuration="5.012979012s" podCreationTimestamp="2025-10-14 13:32:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:32:44.988424164 +0000 UTC m=+1081.836858973" watchObservedRunningTime="2025-10-14 13:32:45.012979012 +0000 UTC m=+1081.861413841" Oct 14 13:32:45 crc kubenswrapper[4725]: I1014 13:32:45.068986 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0163b2fd-7423-4a50-90a8-e312d0b4db22-logs\") pod \"placement-85547dff5b-c66wz\" (UID: \"0163b2fd-7423-4a50-90a8-e312d0b4db22\") " pod="openstack/placement-85547dff5b-c66wz" Oct 14 13:32:45 crc kubenswrapper[4725]: I1014 13:32:45.069164 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0163b2fd-7423-4a50-90a8-e312d0b4db22-public-tls-certs\") pod \"placement-85547dff5b-c66wz\" (UID: \"0163b2fd-7423-4a50-90a8-e312d0b4db22\") " pod="openstack/placement-85547dff5b-c66wz" Oct 14 13:32:45 crc kubenswrapper[4725]: I1014 13:32:45.069224 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0163b2fd-7423-4a50-90a8-e312d0b4db22-scripts\") pod \"placement-85547dff5b-c66wz\" (UID: \"0163b2fd-7423-4a50-90a8-e312d0b4db22\") " pod="openstack/placement-85547dff5b-c66wz" Oct 14 13:32:45 crc kubenswrapper[4725]: I1014 13:32:45.069281 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0163b2fd-7423-4a50-90a8-e312d0b4db22-internal-tls-certs\") pod \"placement-85547dff5b-c66wz\" (UID: \"0163b2fd-7423-4a50-90a8-e312d0b4db22\") " pod="openstack/placement-85547dff5b-c66wz" Oct 14 13:32:45 crc kubenswrapper[4725]: I1014 13:32:45.070134 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0163b2fd-7423-4a50-90a8-e312d0b4db22-combined-ca-bundle\") pod \"placement-85547dff5b-c66wz\" (UID: \"0163b2fd-7423-4a50-90a8-e312d0b4db22\") " pod="openstack/placement-85547dff5b-c66wz" Oct 14 13:32:45 crc kubenswrapper[4725]: I1014 13:32:45.070674 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0163b2fd-7423-4a50-90a8-e312d0b4db22-logs\") pod \"placement-85547dff5b-c66wz\" (UID: \"0163b2fd-7423-4a50-90a8-e312d0b4db22\") " pod="openstack/placement-85547dff5b-c66wz" Oct 14 13:32:45 crc kubenswrapper[4725]: 
I1014 13:32:45.076371 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0163b2fd-7423-4a50-90a8-e312d0b4db22-public-tls-certs\") pod \"placement-85547dff5b-c66wz\" (UID: \"0163b2fd-7423-4a50-90a8-e312d0b4db22\") " pod="openstack/placement-85547dff5b-c66wz" Oct 14 13:32:45 crc kubenswrapper[4725]: I1014 13:32:45.076575 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6rxm\" (UniqueName: \"kubernetes.io/projected/0163b2fd-7423-4a50-90a8-e312d0b4db22-kube-api-access-h6rxm\") pod \"placement-85547dff5b-c66wz\" (UID: \"0163b2fd-7423-4a50-90a8-e312d0b4db22\") " pod="openstack/placement-85547dff5b-c66wz" Oct 14 13:32:45 crc kubenswrapper[4725]: I1014 13:32:45.076845 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0163b2fd-7423-4a50-90a8-e312d0b4db22-config-data\") pod \"placement-85547dff5b-c66wz\" (UID: \"0163b2fd-7423-4a50-90a8-e312d0b4db22\") " pod="openstack/placement-85547dff5b-c66wz" Oct 14 13:32:45 crc kubenswrapper[4725]: I1014 13:32:45.079864 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0163b2fd-7423-4a50-90a8-e312d0b4db22-combined-ca-bundle\") pod \"placement-85547dff5b-c66wz\" (UID: \"0163b2fd-7423-4a50-90a8-e312d0b4db22\") " pod="openstack/placement-85547dff5b-c66wz" Oct 14 13:32:45 crc kubenswrapper[4725]: I1014 13:32:45.081540 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0163b2fd-7423-4a50-90a8-e312d0b4db22-config-data\") pod \"placement-85547dff5b-c66wz\" (UID: \"0163b2fd-7423-4a50-90a8-e312d0b4db22\") " pod="openstack/placement-85547dff5b-c66wz" Oct 14 13:32:45 crc kubenswrapper[4725]: I1014 13:32:45.081792 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0163b2fd-7423-4a50-90a8-e312d0b4db22-internal-tls-certs\") pod \"placement-85547dff5b-c66wz\" (UID: \"0163b2fd-7423-4a50-90a8-e312d0b4db22\") " pod="openstack/placement-85547dff5b-c66wz" Oct 14 13:32:45 crc kubenswrapper[4725]: I1014 13:32:45.082732 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0163b2fd-7423-4a50-90a8-e312d0b4db22-scripts\") pod \"placement-85547dff5b-c66wz\" (UID: \"0163b2fd-7423-4a50-90a8-e312d0b4db22\") " pod="openstack/placement-85547dff5b-c66wz" Oct 14 13:32:45 crc kubenswrapper[4725]: I1014 13:32:45.092000 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6rxm\" (UniqueName: \"kubernetes.io/projected/0163b2fd-7423-4a50-90a8-e312d0b4db22-kube-api-access-h6rxm\") pod \"placement-85547dff5b-c66wz\" (UID: \"0163b2fd-7423-4a50-90a8-e312d0b4db22\") " pod="openstack/placement-85547dff5b-c66wz" Oct 14 13:32:45 crc kubenswrapper[4725]: I1014 13:32:45.233212 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-85547dff5b-c66wz" Oct 14 13:32:45 crc kubenswrapper[4725]: I1014 13:32:45.809656 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85547dff5b-c66wz"] Oct 14 13:32:45 crc kubenswrapper[4725]: I1014 13:32:45.985621 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7756dc76-bc61-4bab-a662-649e33ffc929","Type":"ContainerStarted","Data":"41a729d606fa02f4367514aee139728508c4f67c66425f2a882f5938d0001735"} Oct 14 13:32:45 crc kubenswrapper[4725]: I1014 13:32:45.991045 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-846bc7f557-srw79" event={"ID":"8a4fb75f-e202-4df3-bb0c-bc889dd701d7","Type":"ContainerStarted","Data":"ef6a679764eca101e8f8e5abbad78f3edb2e632098176e907ea43e65bf0ad48f"} Oct 14 13:32:45 crc kubenswrapper[4725]: I1014 13:32:45.991114 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-846bc7f557-srw79" event={"ID":"8a4fb75f-e202-4df3-bb0c-bc889dd701d7","Type":"ContainerStarted","Data":"e78a56a6338e41e9572efe092ec6e40b59dc579c63471a66019cc3d11bfd39c4"} Oct 14 13:32:45 crc kubenswrapper[4725]: I1014 13:32:45.992226 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-846bc7f557-srw79" Oct 14 13:32:45 crc kubenswrapper[4725]: I1014 13:32:45.994952 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85547dff5b-c66wz" event={"ID":"0163b2fd-7423-4a50-90a8-e312d0b4db22","Type":"ContainerStarted","Data":"1629d24cfcf49ae914d9fcdaa02c899ca19ce48d054722adf632bbaeeb64a3fd"} Oct 14 13:32:46 crc kubenswrapper[4725]: I1014 13:32:46.006671 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.006648259 podStartE2EDuration="5.006648259s" podCreationTimestamp="2025-10-14 13:32:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:32:46.000759148 +0000 UTC m=+1082.849193967" watchObservedRunningTime="2025-10-14 13:32:46.006648259 +0000 UTC m=+1082.855083068" Oct 14 13:32:46 crc kubenswrapper[4725]: I1014 13:32:46.037087 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-846bc7f557-srw79" podStartSLOduration=3.037067555 podStartE2EDuration="3.037067555s" podCreationTimestamp="2025-10-14 13:32:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:32:46.024853074 +0000 UTC m=+1082.873287883" watchObservedRunningTime="2025-10-14 13:32:46.037067555 +0000 UTC m=+1082.885502364" Oct 14 13:32:48 crc kubenswrapper[4725]: I1014 13:32:48.769882 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-tlq7r" Oct 14 13:32:48 crc kubenswrapper[4725]: I1014 13:32:48.846286 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/22b0bf13-3251-446f-946b-273f89349427-fernet-keys\") pod \"22b0bf13-3251-446f-946b-273f89349427\" (UID: \"22b0bf13-3251-446f-946b-273f89349427\") " Oct 14 13:32:48 crc kubenswrapper[4725]: I1014 13:32:48.846331 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22b0bf13-3251-446f-946b-273f89349427-config-data\") pod \"22b0bf13-3251-446f-946b-273f89349427\" (UID: \"22b0bf13-3251-446f-946b-273f89349427\") " Oct 14 13:32:48 crc kubenswrapper[4725]: I1014 13:32:48.846380 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64w96\" (UniqueName: \"kubernetes.io/projected/22b0bf13-3251-446f-946b-273f89349427-kube-api-access-64w96\") pod \"22b0bf13-3251-446f-946b-273f89349427\" (UID: \"22b0bf13-3251-446f-946b-273f89349427\") " Oct 14 13:32:48 crc kubenswrapper[4725]: I1014 13:32:48.846424 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22b0bf13-3251-446f-946b-273f89349427-combined-ca-bundle\") pod \"22b0bf13-3251-446f-946b-273f89349427\" (UID: \"22b0bf13-3251-446f-946b-273f89349427\") " Oct 14 13:32:48 crc kubenswrapper[4725]: I1014 13:32:48.846830 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/22b0bf13-3251-446f-946b-273f89349427-credential-keys\") pod \"22b0bf13-3251-446f-946b-273f89349427\" (UID: \"22b0bf13-3251-446f-946b-273f89349427\") " Oct 14 13:32:48 crc kubenswrapper[4725]: I1014 13:32:48.846883 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22b0bf13-3251-446f-946b-273f89349427-scripts\") pod \"22b0bf13-3251-446f-946b-273f89349427\" (UID: \"22b0bf13-3251-446f-946b-273f89349427\") " Oct 14 13:32:48 crc kubenswrapper[4725]: I1014 13:32:48.855057 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22b0bf13-3251-446f-946b-273f89349427-scripts" (OuterVolumeSpecName: "scripts") pod "22b0bf13-3251-446f-946b-273f89349427" (UID: "22b0bf13-3251-446f-946b-273f89349427"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:32:48 crc kubenswrapper[4725]: I1014 13:32:48.857591 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22b0bf13-3251-446f-946b-273f89349427-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "22b0bf13-3251-446f-946b-273f89349427" (UID: "22b0bf13-3251-446f-946b-273f89349427"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:32:48 crc kubenswrapper[4725]: I1014 13:32:48.857673 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22b0bf13-3251-446f-946b-273f89349427-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "22b0bf13-3251-446f-946b-273f89349427" (UID: "22b0bf13-3251-446f-946b-273f89349427"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:32:48 crc kubenswrapper[4725]: I1014 13:32:48.858634 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22b0bf13-3251-446f-946b-273f89349427-kube-api-access-64w96" (OuterVolumeSpecName: "kube-api-access-64w96") pod "22b0bf13-3251-446f-946b-273f89349427" (UID: "22b0bf13-3251-446f-946b-273f89349427"). InnerVolumeSpecName "kube-api-access-64w96". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:32:48 crc kubenswrapper[4725]: I1014 13:32:48.878585 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22b0bf13-3251-446f-946b-273f89349427-config-data" (OuterVolumeSpecName: "config-data") pod "22b0bf13-3251-446f-946b-273f89349427" (UID: "22b0bf13-3251-446f-946b-273f89349427"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:32:48 crc kubenswrapper[4725]: I1014 13:32:48.881803 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22b0bf13-3251-446f-946b-273f89349427-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "22b0bf13-3251-446f-946b-273f89349427" (UID: "22b0bf13-3251-446f-946b-273f89349427"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:32:48 crc kubenswrapper[4725]: I1014 13:32:48.950625 4725 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/22b0bf13-3251-446f-946b-273f89349427-credential-keys\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:48 crc kubenswrapper[4725]: I1014 13:32:48.950669 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22b0bf13-3251-446f-946b-273f89349427-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:48 crc kubenswrapper[4725]: I1014 13:32:48.950683 4725 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/22b0bf13-3251-446f-946b-273f89349427-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:48 crc kubenswrapper[4725]: I1014 13:32:48.950697 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22b0bf13-3251-446f-946b-273f89349427-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:48 crc kubenswrapper[4725]: I1014 13:32:48.950710 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64w96\" (UniqueName: \"kubernetes.io/projected/22b0bf13-3251-446f-946b-273f89349427-kube-api-access-64w96\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:48 crc kubenswrapper[4725]: I1014 13:32:48.950734 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22b0bf13-3251-446f-946b-273f89349427-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:49 crc kubenswrapper[4725]: I1014 13:32:49.034822 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bd1dc15-7e73-480b-9599-123a18602d5e","Type":"ContainerStarted","Data":"055a65bd493fbbae350c73b4f7f66116107548e7b4f0511a73cfac32e1a9bc58"} Oct 14 13:32:49 crc kubenswrapper[4725]: I1014 13:32:49.045597 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85547dff5b-c66wz" 
event={"ID":"0163b2fd-7423-4a50-90a8-e312d0b4db22","Type":"ContainerStarted","Data":"1be17035bd528cab528047f944eec9989f034d645e1ae6535c26a951c19a78e1"} Oct 14 13:32:49 crc kubenswrapper[4725]: I1014 13:32:49.054637 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tlq7r" event={"ID":"22b0bf13-3251-446f-946b-273f89349427","Type":"ContainerDied","Data":"63ea4889fdd1cf7ea19670415274686ecefdd907f156d76b7fdf6e60e92f63df"} Oct 14 13:32:49 crc kubenswrapper[4725]: I1014 13:32:49.054787 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63ea4889fdd1cf7ea19670415274686ecefdd907f156d76b7fdf6e60e92f63df" Oct 14 13:32:49 crc kubenswrapper[4725]: I1014 13:32:49.054980 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tlq7r" Oct 14 13:32:49 crc kubenswrapper[4725]: I1014 13:32:49.713582 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-9d5c84b44-vssnm" Oct 14 13:32:49 crc kubenswrapper[4725]: I1014 13:32:49.713624 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-9d5c84b44-vssnm" Oct 14 13:32:49 crc kubenswrapper[4725]: I1014 13:32:49.878802 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-974b44687-rnvdw"] Oct 14 13:32:49 crc kubenswrapper[4725]: E1014 13:32:49.879393 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22b0bf13-3251-446f-946b-273f89349427" containerName="keystone-bootstrap" Oct 14 13:32:49 crc kubenswrapper[4725]: I1014 13:32:49.879405 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="22b0bf13-3251-446f-946b-273f89349427" containerName="keystone-bootstrap" Oct 14 13:32:49 crc kubenswrapper[4725]: I1014 13:32:49.879611 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="22b0bf13-3251-446f-946b-273f89349427" containerName="keystone-bootstrap" Oct 14 13:32:49 crc kubenswrapper[4725]: I1014 13:32:49.880260 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-974b44687-rnvdw" Oct 14 13:32:49 crc kubenswrapper[4725]: I1014 13:32:49.881821 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Oct 14 13:32:49 crc kubenswrapper[4725]: I1014 13:32:49.882027 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Oct 14 13:32:49 crc kubenswrapper[4725]: I1014 13:32:49.882279 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 14 13:32:49 crc kubenswrapper[4725]: I1014 13:32:49.882493 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 14 13:32:49 crc kubenswrapper[4725]: I1014 13:32:49.882602 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 14 13:32:49 crc kubenswrapper[4725]: I1014 13:32:49.896029 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-h7l4t" Oct 14 13:32:49 crc kubenswrapper[4725]: I1014 13:32:49.939831 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-974b44687-rnvdw"] Oct 14 13:32:49 crc kubenswrapper[4725]: I1014 13:32:49.969782 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/425d7189-b6f4-4bdf-8e0c-7eed10df706d-internal-tls-certs\") pod \"keystone-974b44687-rnvdw\" (UID: \"425d7189-b6f4-4bdf-8e0c-7eed10df706d\") " pod="openstack/keystone-974b44687-rnvdw" Oct 14 13:32:49 crc kubenswrapper[4725]: I1014 13:32:49.970086 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/425d7189-b6f4-4bdf-8e0c-7eed10df706d-combined-ca-bundle\") pod \"keystone-974b44687-rnvdw\" (UID: \"425d7189-b6f4-4bdf-8e0c-7eed10df706d\") " pod="openstack/keystone-974b44687-rnvdw" Oct 14 13:32:49 crc kubenswrapper[4725]: I1014 13:32:49.970190 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/425d7189-b6f4-4bdf-8e0c-7eed10df706d-config-data\") pod \"keystone-974b44687-rnvdw\" (UID: \"425d7189-b6f4-4bdf-8e0c-7eed10df706d\") " pod="openstack/keystone-974b44687-rnvdw" Oct 14 13:32:49 crc kubenswrapper[4725]: I1014 13:32:49.970301 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/425d7189-b6f4-4bdf-8e0c-7eed10df706d-scripts\") pod \"keystone-974b44687-rnvdw\" (UID: \"425d7189-b6f4-4bdf-8e0c-7eed10df706d\") " pod="openstack/keystone-974b44687-rnvdw" Oct 14 13:32:49 crc kubenswrapper[4725]: I1014 13:32:49.970508 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/425d7189-b6f4-4bdf-8e0c-7eed10df706d-public-tls-certs\") pod \"keystone-974b44687-rnvdw\" (UID: \"425d7189-b6f4-4bdf-8e0c-7eed10df706d\") " pod="openstack/keystone-974b44687-rnvdw" Oct 14 13:32:49 crc kubenswrapper[4725]: I1014 13:32:49.970571 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/425d7189-b6f4-4bdf-8e0c-7eed10df706d-fernet-keys\") pod \"keystone-974b44687-rnvdw\" (UID: 
\"425d7189-b6f4-4bdf-8e0c-7eed10df706d\") " pod="openstack/keystone-974b44687-rnvdw" Oct 14 13:32:49 crc kubenswrapper[4725]: I1014 13:32:49.970618 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/425d7189-b6f4-4bdf-8e0c-7eed10df706d-credential-keys\") pod \"keystone-974b44687-rnvdw\" (UID: \"425d7189-b6f4-4bdf-8e0c-7eed10df706d\") " pod="openstack/keystone-974b44687-rnvdw" Oct 14 13:32:49 crc kubenswrapper[4725]: I1014 13:32:49.970693 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pddsk\" (UniqueName: \"kubernetes.io/projected/425d7189-b6f4-4bdf-8e0c-7eed10df706d-kube-api-access-pddsk\") pod \"keystone-974b44687-rnvdw\" (UID: \"425d7189-b6f4-4bdf-8e0c-7eed10df706d\") " pod="openstack/keystone-974b44687-rnvdw" Oct 14 13:32:50 crc kubenswrapper[4725]: I1014 13:32:50.042140 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7cdf854644-xbv6p" Oct 14 13:32:50 crc kubenswrapper[4725]: I1014 13:32:50.042181 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7cdf854644-xbv6p" Oct 14 13:32:50 crc kubenswrapper[4725]: I1014 13:32:50.063088 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85547dff5b-c66wz" event={"ID":"0163b2fd-7423-4a50-90a8-e312d0b4db22","Type":"ContainerStarted","Data":"19807bb41a0ca03834145a596fcafb3c6e06be14314e2c1eef301d46fd7d112c"} Oct 14 13:32:50 crc kubenswrapper[4725]: I1014 13:32:50.064124 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-85547dff5b-c66wz" Oct 14 13:32:50 crc kubenswrapper[4725]: I1014 13:32:50.064154 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-85547dff5b-c66wz" Oct 14 13:32:50 crc kubenswrapper[4725]: I1014 13:32:50.072388 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/425d7189-b6f4-4bdf-8e0c-7eed10df706d-internal-tls-certs\") pod \"keystone-974b44687-rnvdw\" (UID: \"425d7189-b6f4-4bdf-8e0c-7eed10df706d\") " pod="openstack/keystone-974b44687-rnvdw" Oct 14 13:32:50 crc kubenswrapper[4725]: I1014 13:32:50.072840 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/425d7189-b6f4-4bdf-8e0c-7eed10df706d-combined-ca-bundle\") pod \"keystone-974b44687-rnvdw\" (UID: \"425d7189-b6f4-4bdf-8e0c-7eed10df706d\") " pod="openstack/keystone-974b44687-rnvdw" Oct 14 13:32:50 crc kubenswrapper[4725]: I1014 13:32:50.073015 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/425d7189-b6f4-4bdf-8e0c-7eed10df706d-config-data\") pod \"keystone-974b44687-rnvdw\" (UID: \"425d7189-b6f4-4bdf-8e0c-7eed10df706d\") " pod="openstack/keystone-974b44687-rnvdw" Oct 14 13:32:50 crc kubenswrapper[4725]: I1014 13:32:50.073041 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/425d7189-b6f4-4bdf-8e0c-7eed10df706d-scripts\") pod \"keystone-974b44687-rnvdw\" (UID: \"425d7189-b6f4-4bdf-8e0c-7eed10df706d\") " pod="openstack/keystone-974b44687-rnvdw" Oct 14 13:32:50 crc kubenswrapper[4725]: I1014 13:32:50.073427 4725 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/425d7189-b6f4-4bdf-8e0c-7eed10df706d-public-tls-certs\") pod \"keystone-974b44687-rnvdw\" (UID: \"425d7189-b6f4-4bdf-8e0c-7eed10df706d\") " pod="openstack/keystone-974b44687-rnvdw" Oct 14 13:32:50 crc kubenswrapper[4725]: I1014 13:32:50.073477 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/425d7189-b6f4-4bdf-8e0c-7eed10df706d-fernet-keys\") pod \"keystone-974b44687-rnvdw\" (UID: \"425d7189-b6f4-4bdf-8e0c-7eed10df706d\") " pod="openstack/keystone-974b44687-rnvdw" Oct 14 13:32:50 crc kubenswrapper[4725]: I1014 13:32:50.073512 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/425d7189-b6f4-4bdf-8e0c-7eed10df706d-credential-keys\") pod \"keystone-974b44687-rnvdw\" (UID: \"425d7189-b6f4-4bdf-8e0c-7eed10df706d\") " pod="openstack/keystone-974b44687-rnvdw" Oct 14 13:32:50 crc kubenswrapper[4725]: I1014 13:32:50.073534 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pddsk\" (UniqueName: \"kubernetes.io/projected/425d7189-b6f4-4bdf-8e0c-7eed10df706d-kube-api-access-pddsk\") pod \"keystone-974b44687-rnvdw\" (UID: \"425d7189-b6f4-4bdf-8e0c-7eed10df706d\") " pod="openstack/keystone-974b44687-rnvdw" Oct 14 13:32:50 crc kubenswrapper[4725]: I1014 13:32:50.080290 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/425d7189-b6f4-4bdf-8e0c-7eed10df706d-internal-tls-certs\") pod \"keystone-974b44687-rnvdw\" (UID: \"425d7189-b6f4-4bdf-8e0c-7eed10df706d\") " pod="openstack/keystone-974b44687-rnvdw" Oct 14 13:32:50 crc kubenswrapper[4725]: I1014 13:32:50.080627 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/425d7189-b6f4-4bdf-8e0c-7eed10df706d-credential-keys\") pod \"keystone-974b44687-rnvdw\" (UID: \"425d7189-b6f4-4bdf-8e0c-7eed10df706d\") " pod="openstack/keystone-974b44687-rnvdw" Oct 14 13:32:50 crc kubenswrapper[4725]: I1014 13:32:50.081127 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/425d7189-b6f4-4bdf-8e0c-7eed10df706d-combined-ca-bundle\") pod \"keystone-974b44687-rnvdw\" (UID: \"425d7189-b6f4-4bdf-8e0c-7eed10df706d\") " pod="openstack/keystone-974b44687-rnvdw" Oct 14 13:32:50 crc kubenswrapper[4725]: I1014 13:32:50.082216 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/425d7189-b6f4-4bdf-8e0c-7eed10df706d-public-tls-certs\") pod \"keystone-974b44687-rnvdw\" (UID: \"425d7189-b6f4-4bdf-8e0c-7eed10df706d\") " pod="openstack/keystone-974b44687-rnvdw" Oct 14 13:32:50 crc kubenswrapper[4725]: I1014 13:32:50.082783 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/425d7189-b6f4-4bdf-8e0c-7eed10df706d-fernet-keys\") pod \"keystone-974b44687-rnvdw\" (UID: \"425d7189-b6f4-4bdf-8e0c-7eed10df706d\") " pod="openstack/keystone-974b44687-rnvdw" Oct 14 13:32:50 crc kubenswrapper[4725]: I1014 13:32:50.083485 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/425d7189-b6f4-4bdf-8e0c-7eed10df706d-config-data\") pod \"keystone-974b44687-rnvdw\" (UID: 
\"425d7189-b6f4-4bdf-8e0c-7eed10df706d\") " pod="openstack/keystone-974b44687-rnvdw" Oct 14 13:32:50 crc kubenswrapper[4725]: I1014 13:32:50.092928 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/425d7189-b6f4-4bdf-8e0c-7eed10df706d-scripts\") pod \"keystone-974b44687-rnvdw\" (UID: \"425d7189-b6f4-4bdf-8e0c-7eed10df706d\") " pod="openstack/keystone-974b44687-rnvdw" Oct 14 13:32:50 crc kubenswrapper[4725]: I1014 13:32:50.098740 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-85547dff5b-c66wz" podStartSLOduration=6.098720186 podStartE2EDuration="6.098720186s" podCreationTimestamp="2025-10-14 13:32:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:32:50.096588808 +0000 UTC m=+1086.945023647" watchObservedRunningTime="2025-10-14 13:32:50.098720186 +0000 UTC m=+1086.947154995" Oct 14 13:32:50 crc kubenswrapper[4725]: I1014 13:32:50.111104 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pddsk\" (UniqueName: \"kubernetes.io/projected/425d7189-b6f4-4bdf-8e0c-7eed10df706d-kube-api-access-pddsk\") pod \"keystone-974b44687-rnvdw\" (UID: \"425d7189-b6f4-4bdf-8e0c-7eed10df706d\") " pod="openstack/keystone-974b44687-rnvdw" Oct 14 13:32:50 crc kubenswrapper[4725]: I1014 13:32:50.195412 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-974b44687-rnvdw" Oct 14 13:32:50 crc kubenswrapper[4725]: I1014 13:32:50.668229 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-974b44687-rnvdw"] Oct 14 13:32:51 crc kubenswrapper[4725]: I1014 13:32:51.090496 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-974b44687-rnvdw" event={"ID":"425d7189-b6f4-4bdf-8e0c-7eed10df706d","Type":"ContainerStarted","Data":"ba6b77dac1a1e720f705688221fea7b6a5267528a9cf5df2043858f59b1abad6"} Oct 14 13:32:51 crc kubenswrapper[4725]: I1014 13:32:51.090803 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-974b44687-rnvdw" event={"ID":"425d7189-b6f4-4bdf-8e0c-7eed10df706d","Type":"ContainerStarted","Data":"dc8a41450ad74e85ba4afcea2fa0efb03ef6b309497ca1bd097559a46bd3f7ab"} Oct 14 13:32:51 crc kubenswrapper[4725]: I1014 13:32:51.090986 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-974b44687-rnvdw" Oct 14 13:32:51 crc kubenswrapper[4725]: I1014 13:32:51.126339 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-974b44687-rnvdw" podStartSLOduration=2.126319155 podStartE2EDuration="2.126319155s" podCreationTimestamp="2025-10-14 13:32:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:32:51.122776309 +0000 UTC m=+1087.971211118" watchObservedRunningTime="2025-10-14 13:32:51.126319155 +0000 UTC m=+1087.974753964" Oct 14 13:32:51 crc kubenswrapper[4725]: I1014 13:32:51.330647 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-84b966f6c9-h66x8" Oct 14 13:32:51 crc kubenswrapper[4725]: I1014 13:32:51.399032 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-n8mrx"] Oct 14 13:32:51 crc kubenswrapper[4725]: I1014 13:32:51.399603 4725 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/dnsmasq-dns-8b5c85b87-n8mrx" podUID="59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3" containerName="dnsmasq-dns" containerID="cri-o://af525c32cc8dc653c99574eba7c39349958847bcac7fd9f72c617fe843a10b49" gracePeriod=10 Oct 14 13:32:52 crc kubenswrapper[4725]: I1014 13:32:52.031912 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-n8mrx" Oct 14 13:32:52 crc kubenswrapper[4725]: I1014 13:32:52.104016 4725 generic.go:334] "Generic (PLEG): container finished" podID="59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3" containerID="af525c32cc8dc653c99574eba7c39349958847bcac7fd9f72c617fe843a10b49" exitCode=0 Oct 14 13:32:52 crc kubenswrapper[4725]: I1014 13:32:52.104089 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-n8mrx" event={"ID":"59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3","Type":"ContainerDied","Data":"af525c32cc8dc653c99574eba7c39349958847bcac7fd9f72c617fe843a10b49"} Oct 14 13:32:52 crc kubenswrapper[4725]: I1014 13:32:52.104150 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-n8mrx" event={"ID":"59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3","Type":"ContainerDied","Data":"0b1777a95565bb6daf4e1b4dcf10bd1efc79e415420ef2b141d2b2b2f0e4bcc1"} Oct 14 13:32:52 crc kubenswrapper[4725]: I1014 13:32:52.104167 4725 scope.go:117] "RemoveContainer" containerID="af525c32cc8dc653c99574eba7c39349958847bcac7fd9f72c617fe843a10b49" Oct 14 13:32:52 crc kubenswrapper[4725]: I1014 13:32:52.104879 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-n8mrx" Oct 14 13:32:52 crc kubenswrapper[4725]: I1014 13:32:52.128569 4725 scope.go:117] "RemoveContainer" containerID="6e8b06a8e0bc982ee9cac96a1a9a8b52f5e3529fb3d744de50abd9baded63ffa" Oct 14 13:32:52 crc kubenswrapper[4725]: I1014 13:32:52.184879 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 14 13:32:52 crc kubenswrapper[4725]: I1014 13:32:52.184928 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 14 13:32:52 crc kubenswrapper[4725]: I1014 13:32:52.218084 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3-ovsdbserver-sb\") pod \"59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3\" (UID: \"59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3\") " Oct 14 13:32:52 crc kubenswrapper[4725]: I1014 13:32:52.218161 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3-config\") pod \"59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3\" (UID: \"59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3\") " Oct 14 13:32:52 crc kubenswrapper[4725]: I1014 13:32:52.218198 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3-ovsdbserver-nb\") pod \"59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3\" (UID: \"59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3\") " Oct 14 13:32:52 crc kubenswrapper[4725]: I1014 13:32:52.218273 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3-dns-swift-storage-0\") pod 
\"59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3\" (UID: \"59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3\") " Oct 14 13:32:52 crc kubenswrapper[4725]: I1014 13:32:52.218294 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsxfr\" (UniqueName: \"kubernetes.io/projected/59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3-kube-api-access-xsxfr\") pod \"59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3\" (UID: \"59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3\") " Oct 14 13:32:52 crc kubenswrapper[4725]: I1014 13:32:52.218391 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3-dns-svc\") pod \"59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3\" (UID: \"59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3\") " Oct 14 13:32:52 crc kubenswrapper[4725]: I1014 13:32:52.233529 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3-kube-api-access-xsxfr" (OuterVolumeSpecName: "kube-api-access-xsxfr") pod "59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3" (UID: "59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3"). InnerVolumeSpecName "kube-api-access-xsxfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:32:52 crc kubenswrapper[4725]: I1014 13:32:52.261883 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 14 13:32:52 crc kubenswrapper[4725]: I1014 13:32:52.262595 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 14 13:32:52 crc kubenswrapper[4725]: I1014 13:32:52.292813 4725 scope.go:117] "RemoveContainer" containerID="af525c32cc8dc653c99574eba7c39349958847bcac7fd9f72c617fe843a10b49" Oct 14 13:32:52 crc kubenswrapper[4725]: E1014 13:32:52.299628 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af525c32cc8dc653c99574eba7c39349958847bcac7fd9f72c617fe843a10b49\": container with ID starting with af525c32cc8dc653c99574eba7c39349958847bcac7fd9f72c617fe843a10b49 not found: ID does not exist" containerID="af525c32cc8dc653c99574eba7c39349958847bcac7fd9f72c617fe843a10b49" Oct 14 13:32:52 crc kubenswrapper[4725]: I1014 13:32:52.299910 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af525c32cc8dc653c99574eba7c39349958847bcac7fd9f72c617fe843a10b49"} err="failed to get container status \"af525c32cc8dc653c99574eba7c39349958847bcac7fd9f72c617fe843a10b49\": rpc error: code = NotFound desc = could not find container \"af525c32cc8dc653c99574eba7c39349958847bcac7fd9f72c617fe843a10b49\": container with ID starting with af525c32cc8dc653c99574eba7c39349958847bcac7fd9f72c617fe843a10b49 not found: ID does not exist" Oct 14 13:32:52 crc kubenswrapper[4725]: I1014 13:32:52.300044 4725 scope.go:117] "RemoveContainer" containerID="6e8b06a8e0bc982ee9cac96a1a9a8b52f5e3529fb3d744de50abd9baded63ffa" Oct 14 13:32:52 crc kubenswrapper[4725]: E1014 13:32:52.301067 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e8b06a8e0bc982ee9cac96a1a9a8b52f5e3529fb3d744de50abd9baded63ffa\": container with ID starting with 6e8b06a8e0bc982ee9cac96a1a9a8b52f5e3529fb3d744de50abd9baded63ffa not found: ID does not exist" containerID="6e8b06a8e0bc982ee9cac96a1a9a8b52f5e3529fb3d744de50abd9baded63ffa" Oct 14 13:32:52 crc kubenswrapper[4725]: 
I1014 13:32:52.301126 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e8b06a8e0bc982ee9cac96a1a9a8b52f5e3529fb3d744de50abd9baded63ffa"} err="failed to get container status \"6e8b06a8e0bc982ee9cac96a1a9a8b52f5e3529fb3d744de50abd9baded63ffa\": rpc error: code = NotFound desc = could not find container \"6e8b06a8e0bc982ee9cac96a1a9a8b52f5e3529fb3d744de50abd9baded63ffa\": container with ID starting with 6e8b06a8e0bc982ee9cac96a1a9a8b52f5e3529fb3d744de50abd9baded63ffa not found: ID does not exist" Oct 14 13:32:52 crc kubenswrapper[4725]: I1014 13:32:52.321992 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3" (UID: "59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:32:52 crc kubenswrapper[4725]: I1014 13:32:52.323104 4725 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:52 crc kubenswrapper[4725]: I1014 13:32:52.323127 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsxfr\" (UniqueName: \"kubernetes.io/projected/59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3-kube-api-access-xsxfr\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:52 crc kubenswrapper[4725]: I1014 13:32:52.338700 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3" (UID: "59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:32:52 crc kubenswrapper[4725]: I1014 13:32:52.358272 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3" (UID: "59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:32:52 crc kubenswrapper[4725]: I1014 13:32:52.364006 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3-config" (OuterVolumeSpecName: "config") pod "59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3" (UID: "59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:32:52 crc kubenswrapper[4725]: I1014 13:32:52.379080 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3" (UID: "59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:32:52 crc kubenswrapper[4725]: I1014 13:32:52.425083 4725 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:52 crc kubenswrapper[4725]: I1014 13:32:52.425121 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:52 crc kubenswrapper[4725]: I1014 13:32:52.425138 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:52 crc kubenswrapper[4725]: I1014 13:32:52.425211 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:52 crc kubenswrapper[4725]: I1014 13:32:52.439122 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-n8mrx"] Oct 14 13:32:52 crc kubenswrapper[4725]: I1014 13:32:52.446364 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-n8mrx"] Oct 14 13:32:53 crc kubenswrapper[4725]: I1014 13:32:53.119166 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 14 13:32:53 crc kubenswrapper[4725]: I1014 13:32:53.119551 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 14 13:32:53 crc kubenswrapper[4725]: I1014 13:32:53.936619 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3" path="/var/lib/kubelet/pods/59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3/volumes" Oct 14 13:32:54 crc kubenswrapper[4725]: I1014 13:32:54.142381 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rwcgw" event={"ID":"1f19eb4e-4359-4092-a050-e1d695fbb891","Type":"ContainerStarted","Data":"f3ae748cd67304102fa70e79a35a19fab53a8e568b01e0cba3808f45c1d7d2d7"} Oct 14 13:32:54 crc kubenswrapper[4725]: I1014 13:32:54.148619 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-gzbkx" event={"ID":"ec415043-bd33-4ab3-8437-28eda0458656","Type":"ContainerStarted","Data":"e0dcf1caeb71663ee972c6f823679124c497c498e4d1b99091b1fb84b43bfee0"} Oct 14 13:32:54 crc kubenswrapper[4725]: I1014 13:32:54.158725 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-rwcgw" podStartSLOduration=1.635757335 podStartE2EDuration="43.158706382s" podCreationTimestamp="2025-10-14 13:32:11 +0000 UTC" firstStartedPulling="2025-10-14 13:32:11.929043283 +0000 UTC m=+1048.777478092" lastFinishedPulling="2025-10-14 13:32:53.45199233 +0000 UTC m=+1090.300427139" observedRunningTime="2025-10-14 13:32:54.15757041 +0000 UTC m=+1091.006005229" watchObservedRunningTime="2025-10-14 13:32:54.158706382 +0000 UTC m=+1091.007141191" Oct 14 13:32:55 crc kubenswrapper[4725]: I1014 13:32:55.300035 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 14 13:32:55 crc kubenswrapper[4725]: I1014 13:32:55.336885 4725 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-gzbkx" podStartSLOduration=4.750871491 podStartE2EDuration="45.336870625s" podCreationTimestamp="2025-10-14 13:32:10 +0000 UTC" firstStartedPulling="2025-10-14 13:32:11.951165884 +0000 UTC m=+1048.799600693" lastFinishedPulling="2025-10-14 13:32:52.537165018 +0000 UTC m=+1089.385599827" observedRunningTime="2025-10-14 13:32:54.182260662 +0000 UTC m=+1091.030695461" watchObservedRunningTime="2025-10-14 13:32:55.336870625 +0000 UTC m=+1092.185305434" Oct 14 13:32:55 crc kubenswrapper[4725]: I1014 13:32:55.482053 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 14 13:32:55 crc kubenswrapper[4725]: I1014 13:32:55.482186 4725 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 14 13:32:55 crc kubenswrapper[4725]: I1014 13:32:55.484467 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 14 13:32:55 crc kubenswrapper[4725]: I1014 13:32:55.592156 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 14 13:32:57 crc kubenswrapper[4725]: I1014 13:32:57.183003 4725 generic.go:334] "Generic (PLEG): container finished" podID="1f19eb4e-4359-4092-a050-e1d695fbb891" containerID="f3ae748cd67304102fa70e79a35a19fab53a8e568b01e0cba3808f45c1d7d2d7" exitCode=0 Oct 14 13:32:57 crc kubenswrapper[4725]: I1014 13:32:57.183311 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rwcgw" event={"ID":"1f19eb4e-4359-4092-a050-e1d695fbb891","Type":"ContainerDied","Data":"f3ae748cd67304102fa70e79a35a19fab53a8e568b01e0cba3808f45c1d7d2d7"} Oct 14 13:32:59 crc kubenswrapper[4725]: I1014 13:32:59.208687 4725 generic.go:334] "Generic (PLEG): container finished" podID="ec415043-bd33-4ab3-8437-28eda0458656" containerID="e0dcf1caeb71663ee972c6f823679124c497c498e4d1b99091b1fb84b43bfee0" exitCode=0 Oct 14 13:32:59 crc kubenswrapper[4725]: I1014 13:32:59.208738 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-gzbkx" event={"ID":"ec415043-bd33-4ab3-8437-28eda0458656","Type":"ContainerDied","Data":"e0dcf1caeb71663ee972c6f823679124c497c498e4d1b99091b1fb84b43bfee0"} Oct 14 13:32:59 crc kubenswrapper[4725]: I1014 13:32:59.409861 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-rwcgw" Oct 14 13:32:59 crc kubenswrapper[4725]: I1014 13:32:59.439329 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbkkb\" (UniqueName: \"kubernetes.io/projected/1f19eb4e-4359-4092-a050-e1d695fbb891-kube-api-access-xbkkb\") pod \"1f19eb4e-4359-4092-a050-e1d695fbb891\" (UID: \"1f19eb4e-4359-4092-a050-e1d695fbb891\") " Oct 14 13:32:59 crc kubenswrapper[4725]: I1014 13:32:59.439577 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f19eb4e-4359-4092-a050-e1d695fbb891-db-sync-config-data\") pod \"1f19eb4e-4359-4092-a050-e1d695fbb891\" (UID: \"1f19eb4e-4359-4092-a050-e1d695fbb891\") " Oct 14 13:32:59 crc kubenswrapper[4725]: I1014 13:32:59.439685 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f19eb4e-4359-4092-a050-e1d695fbb891-combined-ca-bundle\") pod \"1f19eb4e-4359-4092-a050-e1d695fbb891\" (UID: \"1f19eb4e-4359-4092-a050-e1d695fbb891\") " Oct 14 13:32:59 crc kubenswrapper[4725]: I1014 13:32:59.456821 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f19eb4e-4359-4092-a050-e1d695fbb891-kube-api-access-xbkkb" (OuterVolumeSpecName: "kube-api-access-xbkkb") pod "1f19eb4e-4359-4092-a050-e1d695fbb891" (UID: "1f19eb4e-4359-4092-a050-e1d695fbb891"). InnerVolumeSpecName "kube-api-access-xbkkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:32:59 crc kubenswrapper[4725]: I1014 13:32:59.459790 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f19eb4e-4359-4092-a050-e1d695fbb891-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1f19eb4e-4359-4092-a050-e1d695fbb891" (UID: "1f19eb4e-4359-4092-a050-e1d695fbb891"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:32:59 crc kubenswrapper[4725]: I1014 13:32:59.480561 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f19eb4e-4359-4092-a050-e1d695fbb891-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f19eb4e-4359-4092-a050-e1d695fbb891" (UID: "1f19eb4e-4359-4092-a050-e1d695fbb891"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:32:59 crc kubenswrapper[4725]: I1014 13:32:59.542462 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f19eb4e-4359-4092-a050-e1d695fbb891-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:59 crc kubenswrapper[4725]: I1014 13:32:59.542504 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbkkb\" (UniqueName: \"kubernetes.io/projected/1f19eb4e-4359-4092-a050-e1d695fbb891-kube-api-access-xbkkb\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:59 crc kubenswrapper[4725]: I1014 13:32:59.542516 4725 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f19eb4e-4359-4092-a050-e1d695fbb891-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 13:32:59 crc kubenswrapper[4725]: I1014 13:32:59.716171 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-9d5c84b44-vssnm" podUID="551500de-77a9-4f28-ab4b-b8259e04804b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.043211 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7cdf854644-xbv6p" podUID="0f50192b-c5ae-418d-9d3a-a670d49f8ded" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.219589 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-rwcgw" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.219613 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rwcgw" event={"ID":"1f19eb4e-4359-4092-a050-e1d695fbb891","Type":"ContainerDied","Data":"7d5530e131f29f44828af0626d0373e717bf75c34fb9b2763a11d8faef9ac46d"} Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.220139 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d5530e131f29f44828af0626d0373e717bf75c34fb9b2763a11d8faef9ac46d" Oct 14 13:33:00 crc kubenswrapper[4725]: E1014 13:33:00.256803 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="4bd1dc15-7e73-480b-9599-123a18602d5e" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.565989 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-gzbkx" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.672747 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec415043-bd33-4ab3-8437-28eda0458656-scripts\") pod \"ec415043-bd33-4ab3-8437-28eda0458656\" (UID: \"ec415043-bd33-4ab3-8437-28eda0458656\") " Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.672815 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ec415043-bd33-4ab3-8437-28eda0458656-db-sync-config-data\") pod \"ec415043-bd33-4ab3-8437-28eda0458656\" (UID: \"ec415043-bd33-4ab3-8437-28eda0458656\") " Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.672958 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec415043-bd33-4ab3-8437-28eda0458656-config-data\") pod \"ec415043-bd33-4ab3-8437-28eda0458656\" (UID: \"ec415043-bd33-4ab3-8437-28eda0458656\") " Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.672989 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec415043-bd33-4ab3-8437-28eda0458656-combined-ca-bundle\") pod \"ec415043-bd33-4ab3-8437-28eda0458656\" (UID: \"ec415043-bd33-4ab3-8437-28eda0458656\") " Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.673028 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ec415043-bd33-4ab3-8437-28eda0458656-etc-machine-id\") pod \"ec415043-bd33-4ab3-8437-28eda0458656\" (UID: \"ec415043-bd33-4ab3-8437-28eda0458656\") " Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.673082 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhptj\" (UniqueName: \"kubernetes.io/projected/ec415043-bd33-4ab3-8437-28eda0458656-kube-api-access-lhptj\") pod \"ec415043-bd33-4ab3-8437-28eda0458656\" (UID: \"ec415043-bd33-4ab3-8437-28eda0458656\") " Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.677714 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec415043-bd33-4ab3-8437-28eda0458656-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ec415043-bd33-4ab3-8437-28eda0458656" (UID: "ec415043-bd33-4ab3-8437-28eda0458656"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.684615 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec415043-bd33-4ab3-8437-28eda0458656-scripts" (OuterVolumeSpecName: "scripts") pod "ec415043-bd33-4ab3-8437-28eda0458656" (UID: "ec415043-bd33-4ab3-8437-28eda0458656"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.704624 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec415043-bd33-4ab3-8437-28eda0458656-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ec415043-bd33-4ab3-8437-28eda0458656" (UID: "ec415043-bd33-4ab3-8437-28eda0458656"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.704718 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec415043-bd33-4ab3-8437-28eda0458656-kube-api-access-lhptj" (OuterVolumeSpecName: "kube-api-access-lhptj") pod "ec415043-bd33-4ab3-8437-28eda0458656" (UID: "ec415043-bd33-4ab3-8437-28eda0458656"). InnerVolumeSpecName "kube-api-access-lhptj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.705503 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-65974cff8b-hfxck"] Oct 14 13:33:00 crc kubenswrapper[4725]: E1014 13:33:00.706070 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec415043-bd33-4ab3-8437-28eda0458656" containerName="cinder-db-sync" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.706094 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec415043-bd33-4ab3-8437-28eda0458656" containerName="cinder-db-sync" Oct 14 13:33:00 crc kubenswrapper[4725]: E1014 13:33:00.706121 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3" containerName="dnsmasq-dns" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.706131 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3" containerName="dnsmasq-dns" Oct 14 13:33:00 crc kubenswrapper[4725]: E1014 13:33:00.706144 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3" containerName="init" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.706151 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3" containerName="init" Oct 14 13:33:00 crc kubenswrapper[4725]: E1014 13:33:00.706170 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f19eb4e-4359-4092-a050-e1d695fbb891" containerName="barbican-db-sync" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.706177 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f19eb4e-4359-4092-a050-e1d695fbb891" containerName="barbican-db-sync" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.706397 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec415043-bd33-4ab3-8437-28eda0458656" containerName="cinder-db-sync" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.706422 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="59c9a0b8-b2d8-4f50-9b54-e69624ebbfd3" containerName="dnsmasq-dns" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.706435 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f19eb4e-4359-4092-a050-e1d695fbb891" containerName="barbican-db-sync" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.707624 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-65974cff8b-hfxck" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.715383 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-d7s2t" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.715823 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.716701 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.747643 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec415043-bd33-4ab3-8437-28eda0458656-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec415043-bd33-4ab3-8437-28eda0458656" (UID: "ec415043-bd33-4ab3-8437-28eda0458656"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.751055 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-65974cff8b-hfxck"] Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.775091 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6cdc779-b232-4adb-9a2e-1605f2ebadbf-combined-ca-bundle\") pod \"barbican-keystone-listener-65974cff8b-hfxck\" (UID: \"f6cdc779-b232-4adb-9a2e-1605f2ebadbf\") " pod="openstack/barbican-keystone-listener-65974cff8b-hfxck" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.775123 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6cdc779-b232-4adb-9a2e-1605f2ebadbf-config-data\") pod \"barbican-keystone-listener-65974cff8b-hfxck\" (UID: \"f6cdc779-b232-4adb-9a2e-1605f2ebadbf\") " pod="openstack/barbican-keystone-listener-65974cff8b-hfxck" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.775141 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f6cdc779-b232-4adb-9a2e-1605f2ebadbf-config-data-custom\") pod \"barbican-keystone-listener-65974cff8b-hfxck\" (UID: \"f6cdc779-b232-4adb-9a2e-1605f2ebadbf\") " pod="openstack/barbican-keystone-listener-65974cff8b-hfxck" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.775201 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rnn4\" (UniqueName: \"kubernetes.io/projected/f6cdc779-b232-4adb-9a2e-1605f2ebadbf-kube-api-access-4rnn4\") pod \"barbican-keystone-listener-65974cff8b-hfxck\" (UID: \"f6cdc779-b232-4adb-9a2e-1605f2ebadbf\") " pod="openstack/barbican-keystone-listener-65974cff8b-hfxck" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.775256 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6cdc779-b232-4adb-9a2e-1605f2ebadbf-logs\") pod \"barbican-keystone-listener-65974cff8b-hfxck\" (UID: \"f6cdc779-b232-4adb-9a2e-1605f2ebadbf\") " pod="openstack/barbican-keystone-listener-65974cff8b-hfxck" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.775335 4725 reconciler_common.go:293] "Volume detached for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/ec415043-bd33-4ab3-8437-28eda0458656-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.775345 4725 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ec415043-bd33-4ab3-8437-28eda0458656-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.775355 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec415043-bd33-4ab3-8437-28eda0458656-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.775365 4725 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ec415043-bd33-4ab3-8437-28eda0458656-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.775373 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhptj\" (UniqueName: \"kubernetes.io/projected/ec415043-bd33-4ab3-8437-28eda0458656-kube-api-access-lhptj\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.802544 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6bf494f4f9-qtck9"] Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.804020 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6bf494f4f9-qtck9" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.819160 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.832953 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec415043-bd33-4ab3-8437-28eda0458656-config-data" (OuterVolumeSpecName: "config-data") pod "ec415043-bd33-4ab3-8437-28eda0458656" (UID: "ec415043-bd33-4ab3-8437-28eda0458656"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.833747 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6bf494f4f9-qtck9"] Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.849375 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-qrrtq"] Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.850994 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-qrrtq" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.861993 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-qrrtq"] Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.876687 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21e22c94-f7a6-4caa-8db5-29805e19bdfe-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-qrrtq\" (UID: \"21e22c94-f7a6-4caa-8db5-29805e19bdfe\") " pod="openstack/dnsmasq-dns-75c8ddd69c-qrrtq" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.876728 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rnn4\" (UniqueName: \"kubernetes.io/projected/f6cdc779-b232-4adb-9a2e-1605f2ebadbf-kube-api-access-4rnn4\") pod \"barbican-keystone-listener-65974cff8b-hfxck\" (UID: \"f6cdc779-b232-4adb-9a2e-1605f2ebadbf\") " pod="openstack/barbican-keystone-listener-65974cff8b-hfxck" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.876763 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af54da1d-ac00-444e-b85b-6f4a5d286dc6-config-data\") pod \"barbican-worker-6bf494f4f9-qtck9\" (UID: \"af54da1d-ac00-444e-b85b-6f4a5d286dc6\") " pod="openstack/barbican-worker-6bf494f4f9-qtck9" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.876799 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21e22c94-f7a6-4caa-8db5-29805e19bdfe-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-qrrtq\" (UID: \"21e22c94-f7a6-4caa-8db5-29805e19bdfe\") " pod="openstack/dnsmasq-dns-75c8ddd69c-qrrtq" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.876829 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21e22c94-f7a6-4caa-8db5-29805e19bdfe-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-qrrtq\" (UID: \"21e22c94-f7a6-4caa-8db5-29805e19bdfe\") " pod="openstack/dnsmasq-dns-75c8ddd69c-qrrtq" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.876847 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af54da1d-ac00-444e-b85b-6f4a5d286dc6-config-data-custom\") pod \"barbican-worker-6bf494f4f9-qtck9\" (UID: \"af54da1d-ac00-444e-b85b-6f4a5d286dc6\") " pod="openstack/barbican-worker-6bf494f4f9-qtck9" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.876867 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6cdc779-b232-4adb-9a2e-1605f2ebadbf-logs\") pod \"barbican-keystone-listener-65974cff8b-hfxck\" (UID: \"f6cdc779-b232-4adb-9a2e-1605f2ebadbf\") " pod="openstack/barbican-keystone-listener-65974cff8b-hfxck" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.876886 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qltq5\" (UniqueName: \"kubernetes.io/projected/af54da1d-ac00-444e-b85b-6f4a5d286dc6-kube-api-access-qltq5\") pod \"barbican-worker-6bf494f4f9-qtck9\" (UID: \"af54da1d-ac00-444e-b85b-6f4a5d286dc6\") " pod="openstack/barbican-worker-6bf494f4f9-qtck9" Oct 14 13:33:00 crc 
kubenswrapper[4725]: I1014 13:33:00.876910 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af54da1d-ac00-444e-b85b-6f4a5d286dc6-combined-ca-bundle\") pod \"barbican-worker-6bf494f4f9-qtck9\" (UID: \"af54da1d-ac00-444e-b85b-6f4a5d286dc6\") " pod="openstack/barbican-worker-6bf494f4f9-qtck9" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.876937 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/21e22c94-f7a6-4caa-8db5-29805e19bdfe-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-qrrtq\" (UID: \"21e22c94-f7a6-4caa-8db5-29805e19bdfe\") " pod="openstack/dnsmasq-dns-75c8ddd69c-qrrtq" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.876957 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkz9f\" (UniqueName: \"kubernetes.io/projected/21e22c94-f7a6-4caa-8db5-29805e19bdfe-kube-api-access-bkz9f\") pod \"dnsmasq-dns-75c8ddd69c-qrrtq\" (UID: \"21e22c94-f7a6-4caa-8db5-29805e19bdfe\") " pod="openstack/dnsmasq-dns-75c8ddd69c-qrrtq" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.877002 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6cdc779-b232-4adb-9a2e-1605f2ebadbf-combined-ca-bundle\") pod \"barbican-keystone-listener-65974cff8b-hfxck\" (UID: \"f6cdc779-b232-4adb-9a2e-1605f2ebadbf\") " pod="openstack/barbican-keystone-listener-65974cff8b-hfxck" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.877019 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6cdc779-b232-4adb-9a2e-1605f2ebadbf-config-data\") pod \"barbican-keystone-listener-65974cff8b-hfxck\" (UID: \"f6cdc779-b232-4adb-9a2e-1605f2ebadbf\") " pod="openstack/barbican-keystone-listener-65974cff8b-hfxck" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.877045 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f6cdc779-b232-4adb-9a2e-1605f2ebadbf-config-data-custom\") pod \"barbican-keystone-listener-65974cff8b-hfxck\" (UID: \"f6cdc779-b232-4adb-9a2e-1605f2ebadbf\") " pod="openstack/barbican-keystone-listener-65974cff8b-hfxck" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.877081 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af54da1d-ac00-444e-b85b-6f4a5d286dc6-logs\") pod \"barbican-worker-6bf494f4f9-qtck9\" (UID: \"af54da1d-ac00-444e-b85b-6f4a5d286dc6\") " pod="openstack/barbican-worker-6bf494f4f9-qtck9" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.877100 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21e22c94-f7a6-4caa-8db5-29805e19bdfe-config\") pod \"dnsmasq-dns-75c8ddd69c-qrrtq\" (UID: \"21e22c94-f7a6-4caa-8db5-29805e19bdfe\") " pod="openstack/dnsmasq-dns-75c8ddd69c-qrrtq" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.877151 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec415043-bd33-4ab3-8437-28eda0458656-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 
13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.878920 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6cdc779-b232-4adb-9a2e-1605f2ebadbf-logs\") pod \"barbican-keystone-listener-65974cff8b-hfxck\" (UID: \"f6cdc779-b232-4adb-9a2e-1605f2ebadbf\") " pod="openstack/barbican-keystone-listener-65974cff8b-hfxck" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.883923 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6cdc779-b232-4adb-9a2e-1605f2ebadbf-combined-ca-bundle\") pod \"barbican-keystone-listener-65974cff8b-hfxck\" (UID: \"f6cdc779-b232-4adb-9a2e-1605f2ebadbf\") " pod="openstack/barbican-keystone-listener-65974cff8b-hfxck" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.887790 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f6cdc779-b232-4adb-9a2e-1605f2ebadbf-config-data-custom\") pod \"barbican-keystone-listener-65974cff8b-hfxck\" (UID: \"f6cdc779-b232-4adb-9a2e-1605f2ebadbf\") " pod="openstack/barbican-keystone-listener-65974cff8b-hfxck" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.892408 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6cdc779-b232-4adb-9a2e-1605f2ebadbf-config-data\") pod \"barbican-keystone-listener-65974cff8b-hfxck\" (UID: \"f6cdc779-b232-4adb-9a2e-1605f2ebadbf\") " pod="openstack/barbican-keystone-listener-65974cff8b-hfxck" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.900623 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-55fd7554f8-x8glb"] Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.902118 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-55fd7554f8-x8glb" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.905539 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rnn4\" (UniqueName: \"kubernetes.io/projected/f6cdc779-b232-4adb-9a2e-1605f2ebadbf-kube-api-access-4rnn4\") pod \"barbican-keystone-listener-65974cff8b-hfxck\" (UID: \"f6cdc779-b232-4adb-9a2e-1605f2ebadbf\") " pod="openstack/barbican-keystone-listener-65974cff8b-hfxck" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.913696 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.928474 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-55fd7554f8-x8glb"] Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.978465 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af54da1d-ac00-444e-b85b-6f4a5d286dc6-logs\") pod \"barbican-worker-6bf494f4f9-qtck9\" (UID: \"af54da1d-ac00-444e-b85b-6f4a5d286dc6\") " pod="openstack/barbican-worker-6bf494f4f9-qtck9" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.978822 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21e22c94-f7a6-4caa-8db5-29805e19bdfe-config\") pod \"dnsmasq-dns-75c8ddd69c-qrrtq\" (UID: \"21e22c94-f7a6-4caa-8db5-29805e19bdfe\") " pod="openstack/dnsmasq-dns-75c8ddd69c-qrrtq" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.978924 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21e22c94-f7a6-4caa-8db5-29805e19bdfe-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-qrrtq\" (UID: \"21e22c94-f7a6-4caa-8db5-29805e19bdfe\") " pod="openstack/dnsmasq-dns-75c8ddd69c-qrrtq" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.979012 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af54da1d-ac00-444e-b85b-6f4a5d286dc6-config-data\") pod \"barbican-worker-6bf494f4f9-qtck9\" (UID: \"af54da1d-ac00-444e-b85b-6f4a5d286dc6\") " pod="openstack/barbican-worker-6bf494f4f9-qtck9" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.979128 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21e22c94-f7a6-4caa-8db5-29805e19bdfe-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-qrrtq\" (UID: \"21e22c94-f7a6-4caa-8db5-29805e19bdfe\") " pod="openstack/dnsmasq-dns-75c8ddd69c-qrrtq" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.979247 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk6wf\" (UniqueName: \"kubernetes.io/projected/f3bff483-b10b-41fc-bf73-a04a44c16a23-kube-api-access-zk6wf\") pod \"barbican-api-55fd7554f8-x8glb\" (UID: \"f3bff483-b10b-41fc-bf73-a04a44c16a23\") " pod="openstack/barbican-api-55fd7554f8-x8glb" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.979353 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21e22c94-f7a6-4caa-8db5-29805e19bdfe-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-qrrtq\" (UID: \"21e22c94-f7a6-4caa-8db5-29805e19bdfe\") " pod="openstack/dnsmasq-dns-75c8ddd69c-qrrtq" Oct 14 
13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.979442 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af54da1d-ac00-444e-b85b-6f4a5d286dc6-config-data-custom\") pod \"barbican-worker-6bf494f4f9-qtck9\" (UID: \"af54da1d-ac00-444e-b85b-6f4a5d286dc6\") " pod="openstack/barbican-worker-6bf494f4f9-qtck9" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.979582 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3bff483-b10b-41fc-bf73-a04a44c16a23-logs\") pod \"barbican-api-55fd7554f8-x8glb\" (UID: \"f3bff483-b10b-41fc-bf73-a04a44c16a23\") " pod="openstack/barbican-api-55fd7554f8-x8glb" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.979675 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3bff483-b10b-41fc-bf73-a04a44c16a23-config-data\") pod \"barbican-api-55fd7554f8-x8glb\" (UID: \"f3bff483-b10b-41fc-bf73-a04a44c16a23\") " pod="openstack/barbican-api-55fd7554f8-x8glb" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.979797 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3bff483-b10b-41fc-bf73-a04a44c16a23-config-data-custom\") pod \"barbican-api-55fd7554f8-x8glb\" (UID: \"f3bff483-b10b-41fc-bf73-a04a44c16a23\") " pod="openstack/barbican-api-55fd7554f8-x8glb" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.979897 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qltq5\" (UniqueName: \"kubernetes.io/projected/af54da1d-ac00-444e-b85b-6f4a5d286dc6-kube-api-access-qltq5\") pod \"barbican-worker-6bf494f4f9-qtck9\" (UID: \"af54da1d-ac00-444e-b85b-6f4a5d286dc6\") " pod="openstack/barbican-worker-6bf494f4f9-qtck9" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.980023 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af54da1d-ac00-444e-b85b-6f4a5d286dc6-combined-ca-bundle\") pod \"barbican-worker-6bf494f4f9-qtck9\" (UID: \"af54da1d-ac00-444e-b85b-6f4a5d286dc6\") " pod="openstack/barbican-worker-6bf494f4f9-qtck9" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.980149 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/21e22c94-f7a6-4caa-8db5-29805e19bdfe-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-qrrtq\" (UID: \"21e22c94-f7a6-4caa-8db5-29805e19bdfe\") " pod="openstack/dnsmasq-dns-75c8ddd69c-qrrtq" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.980256 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkz9f\" (UniqueName: \"kubernetes.io/projected/21e22c94-f7a6-4caa-8db5-29805e19bdfe-kube-api-access-bkz9f\") pod \"dnsmasq-dns-75c8ddd69c-qrrtq\" (UID: \"21e22c94-f7a6-4caa-8db5-29805e19bdfe\") " pod="openstack/dnsmasq-dns-75c8ddd69c-qrrtq" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.980514 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3bff483-b10b-41fc-bf73-a04a44c16a23-combined-ca-bundle\") pod \"barbican-api-55fd7554f8-x8glb\" (UID: 
\"f3bff483-b10b-41fc-bf73-a04a44c16a23\") " pod="openstack/barbican-api-55fd7554f8-x8glb" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.981738 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21e22c94-f7a6-4caa-8db5-29805e19bdfe-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-qrrtq\" (UID: \"21e22c94-f7a6-4caa-8db5-29805e19bdfe\") " pod="openstack/dnsmasq-dns-75c8ddd69c-qrrtq" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.985836 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/21e22c94-f7a6-4caa-8db5-29805e19bdfe-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-qrrtq\" (UID: \"21e22c94-f7a6-4caa-8db5-29805e19bdfe\") " pod="openstack/dnsmasq-dns-75c8ddd69c-qrrtq" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.986577 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21e22c94-f7a6-4caa-8db5-29805e19bdfe-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-qrrtq\" (UID: \"21e22c94-f7a6-4caa-8db5-29805e19bdfe\") " pod="openstack/dnsmasq-dns-75c8ddd69c-qrrtq" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.989228 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af54da1d-ac00-444e-b85b-6f4a5d286dc6-logs\") pod \"barbican-worker-6bf494f4f9-qtck9\" (UID: \"af54da1d-ac00-444e-b85b-6f4a5d286dc6\") " pod="openstack/barbican-worker-6bf494f4f9-qtck9" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.989508 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21e22c94-f7a6-4caa-8db5-29805e19bdfe-config\") pod \"dnsmasq-dns-75c8ddd69c-qrrtq\" (UID: \"21e22c94-f7a6-4caa-8db5-29805e19bdfe\") " pod="openstack/dnsmasq-dns-75c8ddd69c-qrrtq" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.991033 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21e22c94-f7a6-4caa-8db5-29805e19bdfe-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-qrrtq\" (UID: \"21e22c94-f7a6-4caa-8db5-29805e19bdfe\") " pod="openstack/dnsmasq-dns-75c8ddd69c-qrrtq" Oct 14 13:33:00 crc kubenswrapper[4725]: I1014 13:33:00.996554 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af54da1d-ac00-444e-b85b-6f4a5d286dc6-config-data\") pod \"barbican-worker-6bf494f4f9-qtck9\" (UID: \"af54da1d-ac00-444e-b85b-6f4a5d286dc6\") " pod="openstack/barbican-worker-6bf494f4f9-qtck9" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.000033 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af54da1d-ac00-444e-b85b-6f4a5d286dc6-config-data-custom\") pod \"barbican-worker-6bf494f4f9-qtck9\" (UID: \"af54da1d-ac00-444e-b85b-6f4a5d286dc6\") " pod="openstack/barbican-worker-6bf494f4f9-qtck9" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.001352 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af54da1d-ac00-444e-b85b-6f4a5d286dc6-combined-ca-bundle\") pod \"barbican-worker-6bf494f4f9-qtck9\" (UID: \"af54da1d-ac00-444e-b85b-6f4a5d286dc6\") " pod="openstack/barbican-worker-6bf494f4f9-qtck9" Oct 14 13:33:01 crc kubenswrapper[4725]: 
I1014 13:33:01.022202 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkz9f\" (UniqueName: \"kubernetes.io/projected/21e22c94-f7a6-4caa-8db5-29805e19bdfe-kube-api-access-bkz9f\") pod \"dnsmasq-dns-75c8ddd69c-qrrtq\" (UID: \"21e22c94-f7a6-4caa-8db5-29805e19bdfe\") " pod="openstack/dnsmasq-dns-75c8ddd69c-qrrtq" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.034000 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qltq5\" (UniqueName: \"kubernetes.io/projected/af54da1d-ac00-444e-b85b-6f4a5d286dc6-kube-api-access-qltq5\") pod \"barbican-worker-6bf494f4f9-qtck9\" (UID: \"af54da1d-ac00-444e-b85b-6f4a5d286dc6\") " pod="openstack/barbican-worker-6bf494f4f9-qtck9" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.083238 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk6wf\" (UniqueName: \"kubernetes.io/projected/f3bff483-b10b-41fc-bf73-a04a44c16a23-kube-api-access-zk6wf\") pod \"barbican-api-55fd7554f8-x8glb\" (UID: \"f3bff483-b10b-41fc-bf73-a04a44c16a23\") " pod="openstack/barbican-api-55fd7554f8-x8glb" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.083294 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3bff483-b10b-41fc-bf73-a04a44c16a23-logs\") pod \"barbican-api-55fd7554f8-x8glb\" (UID: \"f3bff483-b10b-41fc-bf73-a04a44c16a23\") " pod="openstack/barbican-api-55fd7554f8-x8glb" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.083314 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3bff483-b10b-41fc-bf73-a04a44c16a23-config-data\") pod \"barbican-api-55fd7554f8-x8glb\" (UID: \"f3bff483-b10b-41fc-bf73-a04a44c16a23\") " pod="openstack/barbican-api-55fd7554f8-x8glb" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.083335 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3bff483-b10b-41fc-bf73-a04a44c16a23-config-data-custom\") pod \"barbican-api-55fd7554f8-x8glb\" (UID: \"f3bff483-b10b-41fc-bf73-a04a44c16a23\") " pod="openstack/barbican-api-55fd7554f8-x8glb" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.083404 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3bff483-b10b-41fc-bf73-a04a44c16a23-combined-ca-bundle\") pod \"barbican-api-55fd7554f8-x8glb\" (UID: \"f3bff483-b10b-41fc-bf73-a04a44c16a23\") " pod="openstack/barbican-api-55fd7554f8-x8glb" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.084325 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3bff483-b10b-41fc-bf73-a04a44c16a23-logs\") pod \"barbican-api-55fd7554f8-x8glb\" (UID: \"f3bff483-b10b-41fc-bf73-a04a44c16a23\") " pod="openstack/barbican-api-55fd7554f8-x8glb" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.095844 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-65974cff8b-hfxck" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.096086 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3bff483-b10b-41fc-bf73-a04a44c16a23-config-data-custom\") pod \"barbican-api-55fd7554f8-x8glb\" (UID: \"f3bff483-b10b-41fc-bf73-a04a44c16a23\") " pod="openstack/barbican-api-55fd7554f8-x8glb" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.098739 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3bff483-b10b-41fc-bf73-a04a44c16a23-config-data\") pod \"barbican-api-55fd7554f8-x8glb\" (UID: \"f3bff483-b10b-41fc-bf73-a04a44c16a23\") " pod="openstack/barbican-api-55fd7554f8-x8glb" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.105104 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3bff483-b10b-41fc-bf73-a04a44c16a23-combined-ca-bundle\") pod \"barbican-api-55fd7554f8-x8glb\" (UID: \"f3bff483-b10b-41fc-bf73-a04a44c16a23\") " pod="openstack/barbican-api-55fd7554f8-x8glb" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.119937 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk6wf\" (UniqueName: \"kubernetes.io/projected/f3bff483-b10b-41fc-bf73-a04a44c16a23-kube-api-access-zk6wf\") pod \"barbican-api-55fd7554f8-x8glb\" (UID: \"f3bff483-b10b-41fc-bf73-a04a44c16a23\") " pod="openstack/barbican-api-55fd7554f8-x8glb" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.157914 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6bf494f4f9-qtck9" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.182187 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-qrrtq" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.230273 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-55fd7554f8-x8glb" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.275123 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-gzbkx" event={"ID":"ec415043-bd33-4ab3-8437-28eda0458656","Type":"ContainerDied","Data":"46ced52c5ea988369e91648d6aa9254aca236f85f7464b50f7141e7c5eab22ed"} Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.275175 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46ced52c5ea988369e91648d6aa9254aca236f85f7464b50f7141e7c5eab22ed" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.275628 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-gzbkx" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.305903 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4bd1dc15-7e73-480b-9599-123a18602d5e" containerName="ceilometer-notification-agent" containerID="cri-o://15f5abc47c564299170d4a6c8faefc9f3b7a4850aee85003eaf162275aceeec0" gracePeriod=30 Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.305701 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bd1dc15-7e73-480b-9599-123a18602d5e","Type":"ContainerStarted","Data":"f17aa73e95cd39969a6cca6fac5506dbabe8bfa93c68346382996db209218212"} Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.306606 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4bd1dc15-7e73-480b-9599-123a18602d5e" containerName="sg-core" containerID="cri-o://055a65bd493fbbae350c73b4f7f66116107548e7b4f0511a73cfac32e1a9bc58" gracePeriod=30 Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.306603 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4bd1dc15-7e73-480b-9599-123a18602d5e" containerName="proxy-httpd" containerID="cri-o://f17aa73e95cd39969a6cca6fac5506dbabe8bfa93c68346382996db209218212" gracePeriod=30 Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.308007 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.533446 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.536342 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.539834 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-62bg8" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.540110 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.540210 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.540320 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.551501 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.673839 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-qrrtq"] Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.707708 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-9tc2g"] Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.709383 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-9tc2g" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.716892 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-65974cff8b-hfxck"] Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.724723 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-9tc2g"] Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.726731 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/af8fc3f5-e7e3-4328-85ce-1d6aa110cfde-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"af8fc3f5-e7e3-4328-85ce-1d6aa110cfde\") " pod="openstack/cinder-scheduler-0" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.726777 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af8fc3f5-e7e3-4328-85ce-1d6aa110cfde-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"af8fc3f5-e7e3-4328-85ce-1d6aa110cfde\") " pod="openstack/cinder-scheduler-0" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.726813 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af8fc3f5-e7e3-4328-85ce-1d6aa110cfde-scripts\") pod \"cinder-scheduler-0\" (UID: \"af8fc3f5-e7e3-4328-85ce-1d6aa110cfde\") " pod="openstack/cinder-scheduler-0" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.726836 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af8fc3f5-e7e3-4328-85ce-1d6aa110cfde-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"af8fc3f5-e7e3-4328-85ce-1d6aa110cfde\") " pod="openstack/cinder-scheduler-0" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.726852 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af8fc3f5-e7e3-4328-85ce-1d6aa110cfde-config-data\") pod \"cinder-scheduler-0\" (UID: \"af8fc3f5-e7e3-4328-85ce-1d6aa110cfde\") " pod="openstack/cinder-scheduler-0" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.727241 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjsqx\" (UniqueName: \"kubernetes.io/projected/af8fc3f5-e7e3-4328-85ce-1d6aa110cfde-kube-api-access-vjsqx\") pod \"cinder-scheduler-0\" (UID: \"af8fc3f5-e7e3-4328-85ce-1d6aa110cfde\") " pod="openstack/cinder-scheduler-0" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.782298 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.784285 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.786779 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.797116 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.828402 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af8fc3f5-e7e3-4328-85ce-1d6aa110cfde-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"af8fc3f5-e7e3-4328-85ce-1d6aa110cfde\") " pod="openstack/cinder-scheduler-0" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.828482 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af8fc3f5-e7e3-4328-85ce-1d6aa110cfde-config-data\") pod \"cinder-scheduler-0\" (UID: \"af8fc3f5-e7e3-4328-85ce-1d6aa110cfde\") " pod="openstack/cinder-scheduler-0" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.828548 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/063acef7-e054-4cf9-80ed-52c7a384754d-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-9tc2g\" (UID: \"063acef7-e054-4cf9-80ed-52c7a384754d\") " pod="openstack/dnsmasq-dns-5784cf869f-9tc2g" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.828579 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/063acef7-e054-4cf9-80ed-52c7a384754d-config\") pod \"dnsmasq-dns-5784cf869f-9tc2g\" (UID: \"063acef7-e054-4cf9-80ed-52c7a384754d\") " pod="openstack/dnsmasq-dns-5784cf869f-9tc2g" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.828621 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjsqx\" (UniqueName: \"kubernetes.io/projected/af8fc3f5-e7e3-4328-85ce-1d6aa110cfde-kube-api-access-vjsqx\") pod \"cinder-scheduler-0\" (UID: \"af8fc3f5-e7e3-4328-85ce-1d6aa110cfde\") " pod="openstack/cinder-scheduler-0" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.828640 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/063acef7-e054-4cf9-80ed-52c7a384754d-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-9tc2g\" (UID: \"063acef7-e054-4cf9-80ed-52c7a384754d\") " pod="openstack/dnsmasq-dns-5784cf869f-9tc2g" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.828673 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/af8fc3f5-e7e3-4328-85ce-1d6aa110cfde-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"af8fc3f5-e7e3-4328-85ce-1d6aa110cfde\") " pod="openstack/cinder-scheduler-0" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.828695 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af8fc3f5-e7e3-4328-85ce-1d6aa110cfde-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"af8fc3f5-e7e3-4328-85ce-1d6aa110cfde\") " pod="openstack/cinder-scheduler-0" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.828723 4725 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/063acef7-e054-4cf9-80ed-52c7a384754d-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-9tc2g\" (UID: \"063acef7-e054-4cf9-80ed-52c7a384754d\") " pod="openstack/dnsmasq-dns-5784cf869f-9tc2g" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.828747 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/063acef7-e054-4cf9-80ed-52c7a384754d-dns-svc\") pod \"dnsmasq-dns-5784cf869f-9tc2g\" (UID: \"063acef7-e054-4cf9-80ed-52c7a384754d\") " pod="openstack/dnsmasq-dns-5784cf869f-9tc2g" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.828763 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af8fc3f5-e7e3-4328-85ce-1d6aa110cfde-scripts\") pod \"cinder-scheduler-0\" (UID: \"af8fc3f5-e7e3-4328-85ce-1d6aa110cfde\") " pod="openstack/cinder-scheduler-0" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.828783 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw2jn\" (UniqueName: \"kubernetes.io/projected/063acef7-e054-4cf9-80ed-52c7a384754d-kube-api-access-vw2jn\") pod \"dnsmasq-dns-5784cf869f-9tc2g\" (UID: \"063acef7-e054-4cf9-80ed-52c7a384754d\") " pod="openstack/dnsmasq-dns-5784cf869f-9tc2g" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.829601 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/af8fc3f5-e7e3-4328-85ce-1d6aa110cfde-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"af8fc3f5-e7e3-4328-85ce-1d6aa110cfde\") " pod="openstack/cinder-scheduler-0" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.834707 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af8fc3f5-e7e3-4328-85ce-1d6aa110cfde-scripts\") pod \"cinder-scheduler-0\" (UID: \"af8fc3f5-e7e3-4328-85ce-1d6aa110cfde\") " pod="openstack/cinder-scheduler-0" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.837335 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af8fc3f5-e7e3-4328-85ce-1d6aa110cfde-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"af8fc3f5-e7e3-4328-85ce-1d6aa110cfde\") " pod="openstack/cinder-scheduler-0" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.838246 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af8fc3f5-e7e3-4328-85ce-1d6aa110cfde-config-data\") pod \"cinder-scheduler-0\" (UID: \"af8fc3f5-e7e3-4328-85ce-1d6aa110cfde\") " pod="openstack/cinder-scheduler-0" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.845713 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af8fc3f5-e7e3-4328-85ce-1d6aa110cfde-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"af8fc3f5-e7e3-4328-85ce-1d6aa110cfde\") " pod="openstack/cinder-scheduler-0" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.847789 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjsqx\" (UniqueName: \"kubernetes.io/projected/af8fc3f5-e7e3-4328-85ce-1d6aa110cfde-kube-api-access-vjsqx\") pod 
\"cinder-scheduler-0\" (UID: \"af8fc3f5-e7e3-4328-85ce-1d6aa110cfde\") " pod="openstack/cinder-scheduler-0" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.903252 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.934736 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0610536d-5467-4887-85ce-c4a419d93a95-scripts\") pod \"cinder-api-0\" (UID: \"0610536d-5467-4887-85ce-c4a419d93a95\") " pod="openstack/cinder-api-0" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.935192 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l7dv\" (UniqueName: \"kubernetes.io/projected/0610536d-5467-4887-85ce-c4a419d93a95-kube-api-access-9l7dv\") pod \"cinder-api-0\" (UID: \"0610536d-5467-4887-85ce-c4a419d93a95\") " pod="openstack/cinder-api-0" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.935310 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/063acef7-e054-4cf9-80ed-52c7a384754d-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-9tc2g\" (UID: \"063acef7-e054-4cf9-80ed-52c7a384754d\") " pod="openstack/dnsmasq-dns-5784cf869f-9tc2g" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.935348 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0610536d-5467-4887-85ce-c4a419d93a95-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0610536d-5467-4887-85ce-c4a419d93a95\") " pod="openstack/cinder-api-0" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.935401 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/063acef7-e054-4cf9-80ed-52c7a384754d-dns-svc\") pod \"dnsmasq-dns-5784cf869f-9tc2g\" (UID: \"063acef7-e054-4cf9-80ed-52c7a384754d\") " pod="openstack/dnsmasq-dns-5784cf869f-9tc2g" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.935510 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vw2jn\" (UniqueName: \"kubernetes.io/projected/063acef7-e054-4cf9-80ed-52c7a384754d-kube-api-access-vw2jn\") pod \"dnsmasq-dns-5784cf869f-9tc2g\" (UID: \"063acef7-e054-4cf9-80ed-52c7a384754d\") " pod="openstack/dnsmasq-dns-5784cf869f-9tc2g" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.935975 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0610536d-5467-4887-85ce-c4a419d93a95-config-data\") pod \"cinder-api-0\" (UID: \"0610536d-5467-4887-85ce-c4a419d93a95\") " pod="openstack/cinder-api-0" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.936045 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/063acef7-e054-4cf9-80ed-52c7a384754d-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-9tc2g\" (UID: \"063acef7-e054-4cf9-80ed-52c7a384754d\") " pod="openstack/dnsmasq-dns-5784cf869f-9tc2g" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.936135 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/063acef7-e054-4cf9-80ed-52c7a384754d-config\") pod \"dnsmasq-dns-5784cf869f-9tc2g\" (UID: \"063acef7-e054-4cf9-80ed-52c7a384754d\") " pod="openstack/dnsmasq-dns-5784cf869f-9tc2g" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.937130 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0610536d-5467-4887-85ce-c4a419d93a95-config-data-custom\") pod \"cinder-api-0\" (UID: \"0610536d-5467-4887-85ce-c4a419d93a95\") " pod="openstack/cinder-api-0" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.937213 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/063acef7-e054-4cf9-80ed-52c7a384754d-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-9tc2g\" (UID: \"063acef7-e054-4cf9-80ed-52c7a384754d\") " pod="openstack/dnsmasq-dns-5784cf869f-9tc2g" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.937239 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0610536d-5467-4887-85ce-c4a419d93a95-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0610536d-5467-4887-85ce-c4a419d93a95\") " pod="openstack/cinder-api-0" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.937291 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0610536d-5467-4887-85ce-c4a419d93a95-logs\") pod \"cinder-api-0\" (UID: \"0610536d-5467-4887-85ce-c4a419d93a95\") " pod="openstack/cinder-api-0" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.936466 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/063acef7-e054-4cf9-80ed-52c7a384754d-dns-svc\") pod \"dnsmasq-dns-5784cf869f-9tc2g\" (UID: \"063acef7-e054-4cf9-80ed-52c7a384754d\") " pod="openstack/dnsmasq-dns-5784cf869f-9tc2g" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.936502 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/063acef7-e054-4cf9-80ed-52c7a384754d-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-9tc2g\" (UID: \"063acef7-e054-4cf9-80ed-52c7a384754d\") " pod="openstack/dnsmasq-dns-5784cf869f-9tc2g" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.937052 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/063acef7-e054-4cf9-80ed-52c7a384754d-config\") pod \"dnsmasq-dns-5784cf869f-9tc2g\" (UID: \"063acef7-e054-4cf9-80ed-52c7a384754d\") " pod="openstack/dnsmasq-dns-5784cf869f-9tc2g" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.937067 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/063acef7-e054-4cf9-80ed-52c7a384754d-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-9tc2g\" (UID: \"063acef7-e054-4cf9-80ed-52c7a384754d\") " pod="openstack/dnsmasq-dns-5784cf869f-9tc2g" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.939390 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/063acef7-e054-4cf9-80ed-52c7a384754d-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-9tc2g\" (UID: \"063acef7-e054-4cf9-80ed-52c7a384754d\") " 
pod="openstack/dnsmasq-dns-5784cf869f-9tc2g" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.961326 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vw2jn\" (UniqueName: \"kubernetes.io/projected/063acef7-e054-4cf9-80ed-52c7a384754d-kube-api-access-vw2jn\") pod \"dnsmasq-dns-5784cf869f-9tc2g\" (UID: \"063acef7-e054-4cf9-80ed-52c7a384754d\") " pod="openstack/dnsmasq-dns-5784cf869f-9tc2g" Oct 14 13:33:01 crc kubenswrapper[4725]: I1014 13:33:01.992232 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6bf494f4f9-qtck9"] Oct 14 13:33:02 crc kubenswrapper[4725]: I1014 13:33:02.040793 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0610536d-5467-4887-85ce-c4a419d93a95-config-data\") pod \"cinder-api-0\" (UID: \"0610536d-5467-4887-85ce-c4a419d93a95\") " pod="openstack/cinder-api-0" Oct 14 13:33:02 crc kubenswrapper[4725]: I1014 13:33:02.042869 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0610536d-5467-4887-85ce-c4a419d93a95-config-data-custom\") pod \"cinder-api-0\" (UID: \"0610536d-5467-4887-85ce-c4a419d93a95\") " pod="openstack/cinder-api-0" Oct 14 13:33:02 crc kubenswrapper[4725]: I1014 13:33:02.042991 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0610536d-5467-4887-85ce-c4a419d93a95-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0610536d-5467-4887-85ce-c4a419d93a95\") " pod="openstack/cinder-api-0" Oct 14 13:33:02 crc kubenswrapper[4725]: I1014 13:33:02.043038 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0610536d-5467-4887-85ce-c4a419d93a95-logs\") pod \"cinder-api-0\" (UID: \"0610536d-5467-4887-85ce-c4a419d93a95\") " pod="openstack/cinder-api-0" Oct 14 13:33:02 crc kubenswrapper[4725]: I1014 13:33:02.043058 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0610536d-5467-4887-85ce-c4a419d93a95-scripts\") pod \"cinder-api-0\" (UID: \"0610536d-5467-4887-85ce-c4a419d93a95\") " pod="openstack/cinder-api-0" Oct 14 13:33:02 crc kubenswrapper[4725]: I1014 13:33:02.043092 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l7dv\" (UniqueName: \"kubernetes.io/projected/0610536d-5467-4887-85ce-c4a419d93a95-kube-api-access-9l7dv\") pod \"cinder-api-0\" (UID: \"0610536d-5467-4887-85ce-c4a419d93a95\") " pod="openstack/cinder-api-0" Oct 14 13:33:02 crc kubenswrapper[4725]: I1014 13:33:02.043132 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0610536d-5467-4887-85ce-c4a419d93a95-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0610536d-5467-4887-85ce-c4a419d93a95\") " pod="openstack/cinder-api-0" Oct 14 13:33:02 crc kubenswrapper[4725]: I1014 13:33:02.043286 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0610536d-5467-4887-85ce-c4a419d93a95-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0610536d-5467-4887-85ce-c4a419d93a95\") " pod="openstack/cinder-api-0" Oct 14 13:33:02 crc kubenswrapper[4725]: I1014 13:33:02.044623 4725 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0610536d-5467-4887-85ce-c4a419d93a95-logs\") pod \"cinder-api-0\" (UID: \"0610536d-5467-4887-85ce-c4a419d93a95\") " pod="openstack/cinder-api-0" Oct 14 13:33:02 crc kubenswrapper[4725]: I1014 13:33:02.048167 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-9tc2g" Oct 14 13:33:02 crc kubenswrapper[4725]: I1014 13:33:02.051164 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0610536d-5467-4887-85ce-c4a419d93a95-scripts\") pod \"cinder-api-0\" (UID: \"0610536d-5467-4887-85ce-c4a419d93a95\") " pod="openstack/cinder-api-0" Oct 14 13:33:02 crc kubenswrapper[4725]: I1014 13:33:02.052434 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0610536d-5467-4887-85ce-c4a419d93a95-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0610536d-5467-4887-85ce-c4a419d93a95\") " pod="openstack/cinder-api-0" Oct 14 13:33:02 crc kubenswrapper[4725]: I1014 13:33:02.052530 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0610536d-5467-4887-85ce-c4a419d93a95-config-data-custom\") pod \"cinder-api-0\" (UID: \"0610536d-5467-4887-85ce-c4a419d93a95\") " pod="openstack/cinder-api-0" Oct 14 13:33:02 crc kubenswrapper[4725]: I1014 13:33:02.052581 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0610536d-5467-4887-85ce-c4a419d93a95-config-data\") pod \"cinder-api-0\" (UID: \"0610536d-5467-4887-85ce-c4a419d93a95\") " pod="openstack/cinder-api-0" Oct 14 13:33:02 crc kubenswrapper[4725]: I1014 13:33:02.072004 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l7dv\" (UniqueName: \"kubernetes.io/projected/0610536d-5467-4887-85ce-c4a419d93a95-kube-api-access-9l7dv\") pod \"cinder-api-0\" (UID: \"0610536d-5467-4887-85ce-c4a419d93a95\") " pod="openstack/cinder-api-0" Oct 14 13:33:02 crc kubenswrapper[4725]: I1014 13:33:02.104397 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 14 13:33:02 crc kubenswrapper[4725]: I1014 13:33:02.121393 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-55fd7554f8-x8glb"] Oct 14 13:33:02 crc kubenswrapper[4725]: I1014 13:33:02.150438 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-qrrtq"] Oct 14 13:33:02 crc kubenswrapper[4725]: I1014 13:33:02.317918 4725 generic.go:334] "Generic (PLEG): container finished" podID="4bd1dc15-7e73-480b-9599-123a18602d5e" containerID="f17aa73e95cd39969a6cca6fac5506dbabe8bfa93c68346382996db209218212" exitCode=0 Oct 14 13:33:02 crc kubenswrapper[4725]: I1014 13:33:02.318157 4725 generic.go:334] "Generic (PLEG): container finished" podID="4bd1dc15-7e73-480b-9599-123a18602d5e" containerID="055a65bd493fbbae350c73b4f7f66116107548e7b4f0511a73cfac32e1a9bc58" exitCode=2 Oct 14 13:33:02 crc kubenswrapper[4725]: I1014 13:33:02.318213 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bd1dc15-7e73-480b-9599-123a18602d5e","Type":"ContainerDied","Data":"f17aa73e95cd39969a6cca6fac5506dbabe8bfa93c68346382996db209218212"} Oct 14 13:33:02 crc kubenswrapper[4725]: I1014 13:33:02.318244 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bd1dc15-7e73-480b-9599-123a18602d5e","Type":"ContainerDied","Data":"055a65bd493fbbae350c73b4f7f66116107548e7b4f0511a73cfac32e1a9bc58"} Oct 14 13:33:02 crc kubenswrapper[4725]: I1014 13:33:02.334690 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6bf494f4f9-qtck9" event={"ID":"af54da1d-ac00-444e-b85b-6f4a5d286dc6","Type":"ContainerStarted","Data":"52730876e3d141ef796752acf011d9f334877274ea050eeae82363183b1efea2"} Oct 14 13:33:02 crc kubenswrapper[4725]: I1014 13:33:02.339545 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-65974cff8b-hfxck" event={"ID":"f6cdc779-b232-4adb-9a2e-1605f2ebadbf","Type":"ContainerStarted","Data":"14607d661f10157c10ea24efff048ec23bb49af7045883e63505c420c51b3e90"} Oct 14 13:33:02 crc kubenswrapper[4725]: I1014 13:33:02.345934 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-qrrtq" event={"ID":"21e22c94-f7a6-4caa-8db5-29805e19bdfe","Type":"ContainerStarted","Data":"928aeb53870bd345592be11442cd6427797e807074b8536f11c5ffd6c5b2b177"} Oct 14 13:33:02 crc kubenswrapper[4725]: I1014 13:33:02.348772 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55fd7554f8-x8glb" event={"ID":"f3bff483-b10b-41fc-bf73-a04a44c16a23","Type":"ContainerStarted","Data":"e66d6169e67248f6be5550a0a537e08f2357c7cd3343e60141ecba41fc641f47"} Oct 14 13:33:02 crc kubenswrapper[4725]: I1014 13:33:02.402971 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 14 13:33:02 crc kubenswrapper[4725]: I1014 13:33:02.596110 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-9tc2g"] Oct 14 13:33:02 crc kubenswrapper[4725]: I1014 13:33:02.692106 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 14 13:33:02 crc kubenswrapper[4725]: W1014 13:33:02.727265 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0610536d_5467_4887_85ce_c4a419d93a95.slice/crio-378f741f53a63e435faefa4bfa35ab17188971794a5e7cc669bc0f88e1804d25 
Oct 14 13:33:03 crc kubenswrapper[4725]: I1014 13:33:03.360656 4725 generic.go:334] "Generic (PLEG): container finished" podID="21e22c94-f7a6-4caa-8db5-29805e19bdfe" containerID="2c506944ff567a8fc9b237bf9cd9ce0b04202da23b03c2481c60163839e5ba3d" exitCode=0
Oct 14 13:33:03 crc kubenswrapper[4725]: I1014 13:33:03.360700 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-qrrtq" event={"ID":"21e22c94-f7a6-4caa-8db5-29805e19bdfe","Type":"ContainerDied","Data":"2c506944ff567a8fc9b237bf9cd9ce0b04202da23b03c2481c60163839e5ba3d"}
Oct 14 13:33:03 crc kubenswrapper[4725]: I1014 13:33:03.366304 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55fd7554f8-x8glb" event={"ID":"f3bff483-b10b-41fc-bf73-a04a44c16a23","Type":"ContainerStarted","Data":"55548febbe0b53a217df6cf21d68aba9a789888eac39f23434d61707961d5945"}
Oct 14 13:33:03 crc kubenswrapper[4725]: I1014 13:33:03.366382 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55fd7554f8-x8glb" event={"ID":"f3bff483-b10b-41fc-bf73-a04a44c16a23","Type":"ContainerStarted","Data":"5b2fb61fd73b5e64f2359e05af6265b082d5d264ac480e6a37d56c1a7f4d0161"}
Oct 14 13:33:03 crc kubenswrapper[4725]: I1014 13:33:03.366397 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-55fd7554f8-x8glb"
Oct 14 13:33:03 crc kubenswrapper[4725]: I1014 13:33:03.366407 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-55fd7554f8-x8glb"
Oct 14 13:33:03 crc kubenswrapper[4725]: I1014 13:33:03.369019 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0610536d-5467-4887-85ce-c4a419d93a95","Type":"ContainerStarted","Data":"378f741f53a63e435faefa4bfa35ab17188971794a5e7cc669bc0f88e1804d25"}
Oct 14 13:33:03 crc kubenswrapper[4725]: I1014 13:33:03.373314 4725 generic.go:334] "Generic (PLEG): container finished" podID="063acef7-e054-4cf9-80ed-52c7a384754d" containerID="e2171f1a65477c2a1a866f2f41b85dfe2710ef7caa00a5c98632520985ff599d" exitCode=0
Oct 14 13:33:03 crc kubenswrapper[4725]: I1014 13:33:03.373477 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-9tc2g" event={"ID":"063acef7-e054-4cf9-80ed-52c7a384754d","Type":"ContainerDied","Data":"e2171f1a65477c2a1a866f2f41b85dfe2710ef7caa00a5c98632520985ff599d"}
Oct 14 13:33:03 crc kubenswrapper[4725]: I1014 13:33:03.373506 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-9tc2g" event={"ID":"063acef7-e054-4cf9-80ed-52c7a384754d","Type":"ContainerStarted","Data":"c9ab03ab85c1c7c9837dfabd46b412de4573ffa2d8f13ba4cb3363796855d570"}
Oct 14 13:33:03 crc kubenswrapper[4725]: I1014 13:33:03.386482 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"af8fc3f5-e7e3-4328-85ce-1d6aa110cfde","Type":"ContainerStarted","Data":"7841da58d65448274e49a8f28ca64b32bd92f5cba7e255140359905d5dbae433"}
Oct 14 13:33:03 crc kubenswrapper[4725]: I1014 13:33:03.966158 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-55fd7554f8-x8glb" podStartSLOduration=3.966141047 podStartE2EDuration="3.966141047s" podCreationTimestamp="2025-10-14 13:33:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:33:03.442877796 +0000 UTC m=+1100.291312605" watchObservedRunningTime="2025-10-14 13:33:03.966141047 +0000 UTC m=+1100.814575856"
Oct 14 13:33:04 crc kubenswrapper[4725]: I1014 13:33:04.400858 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0610536d-5467-4887-85ce-c4a419d93a95","Type":"ContainerStarted","Data":"5d9a3f5395d7f339d83f7bdadfb545e6ffc432a0ab8f11e9b34764c37e123865"}
Oct 14 13:33:04 crc kubenswrapper[4725]: I1014 13:33:04.405428 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-qrrtq" event={"ID":"21e22c94-f7a6-4caa-8db5-29805e19bdfe","Type":"ContainerDied","Data":"928aeb53870bd345592be11442cd6427797e807074b8536f11c5ffd6c5b2b177"}
Oct 14 13:33:04 crc kubenswrapper[4725]: I1014 13:33:04.405506 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="928aeb53870bd345592be11442cd6427797e807074b8536f11c5ffd6c5b2b177"
Oct 14 13:33:04 crc kubenswrapper[4725]: I1014 13:33:04.559762 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-qrrtq"
Oct 14 13:33:04 crc kubenswrapper[4725]: I1014 13:33:04.711675 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21e22c94-f7a6-4caa-8db5-29805e19bdfe-config\") pod \"21e22c94-f7a6-4caa-8db5-29805e19bdfe\" (UID: \"21e22c94-f7a6-4caa-8db5-29805e19bdfe\") "
Oct 14 13:33:04 crc kubenswrapper[4725]: I1014 13:33:04.711933 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21e22c94-f7a6-4caa-8db5-29805e19bdfe-dns-svc\") pod \"21e22c94-f7a6-4caa-8db5-29805e19bdfe\" (UID: \"21e22c94-f7a6-4caa-8db5-29805e19bdfe\") "
Oct 14 13:33:04 crc kubenswrapper[4725]: I1014 13:33:04.712110 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21e22c94-f7a6-4caa-8db5-29805e19bdfe-ovsdbserver-nb\") pod \"21e22c94-f7a6-4caa-8db5-29805e19bdfe\" (UID: \"21e22c94-f7a6-4caa-8db5-29805e19bdfe\") "
Oct 14 13:33:04 crc kubenswrapper[4725]: I1014 13:33:04.712266 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21e22c94-f7a6-4caa-8db5-29805e19bdfe-ovsdbserver-sb\") pod \"21e22c94-f7a6-4caa-8db5-29805e19bdfe\" (UID: \"21e22c94-f7a6-4caa-8db5-29805e19bdfe\") "
Oct 14 13:33:04 crc kubenswrapper[4725]: I1014 13:33:04.712307 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkz9f\" (UniqueName: \"kubernetes.io/projected/21e22c94-f7a6-4caa-8db5-29805e19bdfe-kube-api-access-bkz9f\") pod \"21e22c94-f7a6-4caa-8db5-29805e19bdfe\" (UID: \"21e22c94-f7a6-4caa-8db5-29805e19bdfe\") "
Oct 14 13:33:04 crc kubenswrapper[4725]: I1014 13:33:04.713323 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/21e22c94-f7a6-4caa-8db5-29805e19bdfe-dns-swift-storage-0\") pod \"21e22c94-f7a6-4caa-8db5-29805e19bdfe\" (UID: \"21e22c94-f7a6-4caa-8db5-29805e19bdfe\") "
Oct 14 13:33:04 crc kubenswrapper[4725]: I1014 13:33:04.720632 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21e22c94-f7a6-4caa-8db5-29805e19bdfe-kube-api-access-bkz9f" (OuterVolumeSpecName: "kube-api-access-bkz9f") pod "21e22c94-f7a6-4caa-8db5-29805e19bdfe" (UID: "21e22c94-f7a6-4caa-8db5-29805e19bdfe"). InnerVolumeSpecName "kube-api-access-bkz9f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:33:04 crc kubenswrapper[4725]: I1014 13:33:04.758182 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21e22c94-f7a6-4caa-8db5-29805e19bdfe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "21e22c94-f7a6-4caa-8db5-29805e19bdfe" (UID: "21e22c94-f7a6-4caa-8db5-29805e19bdfe"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:33:04 crc kubenswrapper[4725]: I1014 13:33:04.761376 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Oct 14 13:33:04 crc kubenswrapper[4725]: I1014 13:33:04.777997 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21e22c94-f7a6-4caa-8db5-29805e19bdfe-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "21e22c94-f7a6-4caa-8db5-29805e19bdfe" (UID: "21e22c94-f7a6-4caa-8db5-29805e19bdfe"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:33:04 crc kubenswrapper[4725]: I1014 13:33:04.788125 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21e22c94-f7a6-4caa-8db5-29805e19bdfe-config" (OuterVolumeSpecName: "config") pod "21e22c94-f7a6-4caa-8db5-29805e19bdfe" (UID: "21e22c94-f7a6-4caa-8db5-29805e19bdfe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:33:04 crc kubenswrapper[4725]: I1014 13:33:04.799877 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21e22c94-f7a6-4caa-8db5-29805e19bdfe-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "21e22c94-f7a6-4caa-8db5-29805e19bdfe" (UID: "21e22c94-f7a6-4caa-8db5-29805e19bdfe"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 13:33:04 crc kubenswrapper[4725]: I1014 13:33:04.806612 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21e22c94-f7a6-4caa-8db5-29805e19bdfe-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "21e22c94-f7a6-4caa-8db5-29805e19bdfe" (UID: "21e22c94-f7a6-4caa-8db5-29805e19bdfe"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:33:04 crc kubenswrapper[4725]: I1014 13:33:04.818369 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21e22c94-f7a6-4caa-8db5-29805e19bdfe-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:04 crc kubenswrapper[4725]: I1014 13:33:04.818401 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21e22c94-f7a6-4caa-8db5-29805e19bdfe-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:04 crc kubenswrapper[4725]: I1014 13:33:04.818411 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkz9f\" (UniqueName: \"kubernetes.io/projected/21e22c94-f7a6-4caa-8db5-29805e19bdfe-kube-api-access-bkz9f\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:04 crc kubenswrapper[4725]: I1014 13:33:04.818436 4725 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/21e22c94-f7a6-4caa-8db5-29805e19bdfe-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:04 crc kubenswrapper[4725]: I1014 13:33:04.818456 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21e22c94-f7a6-4caa-8db5-29805e19bdfe-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:04 crc kubenswrapper[4725]: I1014 13:33:04.818464 4725 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21e22c94-f7a6-4caa-8db5-29805e19bdfe-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:05 crc kubenswrapper[4725]: I1014 13:33:05.417330 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"af8fc3f5-e7e3-4328-85ce-1d6aa110cfde","Type":"ContainerStarted","Data":"79c016f0d4798ff8e63082ab743f0b7872522ecc9d2b4c6587ef211104989327"} Oct 14 13:33:05 crc kubenswrapper[4725]: I1014 13:33:05.420370 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0610536d-5467-4887-85ce-c4a419d93a95","Type":"ContainerStarted","Data":"dbd14bc393fd84bcad6f07b21eb6f6efe76bf3d6b44f4f8252000d4e1ed0d102"} Oct 14 13:33:05 crc kubenswrapper[4725]: I1014 13:33:05.420464 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Oct 14 13:33:05 crc kubenswrapper[4725]: I1014 13:33:05.420461 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="0610536d-5467-4887-85ce-c4a419d93a95" containerName="cinder-api-log" containerID="cri-o://5d9a3f5395d7f339d83f7bdadfb545e6ffc432a0ab8f11e9b34764c37e123865" gracePeriod=30 Oct 14 13:33:05 crc kubenswrapper[4725]: I1014 13:33:05.420517 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="0610536d-5467-4887-85ce-c4a419d93a95" containerName="cinder-api" containerID="cri-o://dbd14bc393fd84bcad6f07b21eb6f6efe76bf3d6b44f4f8252000d4e1ed0d102" gracePeriod=30 Oct 14 13:33:05 crc kubenswrapper[4725]: I1014 13:33:05.426317 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6bf494f4f9-qtck9" event={"ID":"af54da1d-ac00-444e-b85b-6f4a5d286dc6","Type":"ContainerStarted","Data":"e2774de08f37081a1114e1e320128d2381f1b0ad09f941d92379d96972046d91"} Oct 14 13:33:05 crc kubenswrapper[4725]: I1014 13:33:05.426367 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-worker-6bf494f4f9-qtck9" event={"ID":"af54da1d-ac00-444e-b85b-6f4a5d286dc6","Type":"ContainerStarted","Data":"0d3811239d5e0348e5cbcb6e7d0e5ceb162840e9b7032a3aff19d6e322164e30"} Oct 14 13:33:05 crc kubenswrapper[4725]: I1014 13:33:05.429818 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-9tc2g" event={"ID":"063acef7-e054-4cf9-80ed-52c7a384754d","Type":"ContainerStarted","Data":"2156e62902c02a39c6e55d219e7c1825aa4bf75f465431d51fad0db9eb921482"} Oct 14 13:33:05 crc kubenswrapper[4725]: I1014 13:33:05.429955 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5784cf869f-9tc2g" Oct 14 13:33:05 crc kubenswrapper[4725]: I1014 13:33:05.432219 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-qrrtq" Oct 14 13:33:05 crc kubenswrapper[4725]: I1014 13:33:05.434355 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-65974cff8b-hfxck" event={"ID":"f6cdc779-b232-4adb-9a2e-1605f2ebadbf","Type":"ContainerStarted","Data":"748c0b5ec115dce46a1464a0903ec45fb22d9149e04de859a8da99b6596a2e3a"} Oct 14 13:33:05 crc kubenswrapper[4725]: I1014 13:33:05.434391 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-65974cff8b-hfxck" event={"ID":"f6cdc779-b232-4adb-9a2e-1605f2ebadbf","Type":"ContainerStarted","Data":"a9f40a111082f7cedd5562404ad1f3bedea305ecee6961364ea05fd19919084b"} Oct 14 13:33:05 crc kubenswrapper[4725]: I1014 13:33:05.458017 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.457999404 podStartE2EDuration="4.457999404s" podCreationTimestamp="2025-10-14 13:33:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:33:05.440789706 +0000 UTC m=+1102.289224525" watchObservedRunningTime="2025-10-14 13:33:05.457999404 +0000 UTC m=+1102.306434213" Oct 14 13:33:05 crc kubenswrapper[4725]: I1014 13:33:05.473034 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-65974cff8b-hfxck" podStartSLOduration=2.78930972 podStartE2EDuration="5.473015462s" podCreationTimestamp="2025-10-14 13:33:00 +0000 UTC" firstStartedPulling="2025-10-14 13:33:01.684209893 +0000 UTC m=+1098.532644702" lastFinishedPulling="2025-10-14 13:33:04.367915635 +0000 UTC m=+1101.216350444" observedRunningTime="2025-10-14 13:33:05.462142066 +0000 UTC m=+1102.310576875" watchObservedRunningTime="2025-10-14 13:33:05.473015462 +0000 UTC m=+1102.321450271" Oct 14 13:33:05 crc kubenswrapper[4725]: I1014 13:33:05.532168 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5784cf869f-9tc2g" podStartSLOduration=4.5321489 podStartE2EDuration="4.5321489s" podCreationTimestamp="2025-10-14 13:33:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:33:05.513210816 +0000 UTC m=+1102.361645635" watchObservedRunningTime="2025-10-14 13:33:05.5321489 +0000 UTC m=+1102.380583709" Oct 14 13:33:05 crc kubenswrapper[4725]: I1014 13:33:05.536158 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6bf494f4f9-qtck9" podStartSLOduration=3.180012547 podStartE2EDuration="5.536138279s" 
podCreationTimestamp="2025-10-14 13:33:00 +0000 UTC" firstStartedPulling="2025-10-14 13:33:02.011755012 +0000 UTC m=+1098.860189821" lastFinishedPulling="2025-10-14 13:33:04.367880744 +0000 UTC m=+1101.216315553" observedRunningTime="2025-10-14 13:33:05.52698114 +0000 UTC m=+1102.375415949" watchObservedRunningTime="2025-10-14 13:33:05.536138279 +0000 UTC m=+1102.384573088" Oct 14 13:33:05 crc kubenswrapper[4725]: I1014 13:33:05.595677 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-qrrtq"] Oct 14 13:33:05 crc kubenswrapper[4725]: I1014 13:33:05.595734 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-qrrtq"] Oct 14 13:33:05 crc kubenswrapper[4725]: I1014 13:33:05.967588 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21e22c94-f7a6-4caa-8db5-29805e19bdfe" path="/var/lib/kubelet/pods/21e22c94-f7a6-4caa-8db5-29805e19bdfe/volumes" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.102345 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.251974 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0610536d-5467-4887-85ce-c4a419d93a95-combined-ca-bundle\") pod \"0610536d-5467-4887-85ce-c4a419d93a95\" (UID: \"0610536d-5467-4887-85ce-c4a419d93a95\") " Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.252023 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0610536d-5467-4887-85ce-c4a419d93a95-config-data\") pod \"0610536d-5467-4887-85ce-c4a419d93a95\" (UID: \"0610536d-5467-4887-85ce-c4a419d93a95\") " Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.252176 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0610536d-5467-4887-85ce-c4a419d93a95-scripts\") pod \"0610536d-5467-4887-85ce-c4a419d93a95\" (UID: \"0610536d-5467-4887-85ce-c4a419d93a95\") " Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.252232 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0610536d-5467-4887-85ce-c4a419d93a95-logs\") pod \"0610536d-5467-4887-85ce-c4a419d93a95\" (UID: \"0610536d-5467-4887-85ce-c4a419d93a95\") " Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.252316 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9l7dv\" (UniqueName: \"kubernetes.io/projected/0610536d-5467-4887-85ce-c4a419d93a95-kube-api-access-9l7dv\") pod \"0610536d-5467-4887-85ce-c4a419d93a95\" (UID: \"0610536d-5467-4887-85ce-c4a419d93a95\") " Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.252428 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0610536d-5467-4887-85ce-c4a419d93a95-etc-machine-id\") pod \"0610536d-5467-4887-85ce-c4a419d93a95\" (UID: \"0610536d-5467-4887-85ce-c4a419d93a95\") " Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.252486 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0610536d-5467-4887-85ce-c4a419d93a95-config-data-custom\") pod \"0610536d-5467-4887-85ce-c4a419d93a95\" (UID: 
\"0610536d-5467-4887-85ce-c4a419d93a95\") " Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.254643 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0610536d-5467-4887-85ce-c4a419d93a95-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0610536d-5467-4887-85ce-c4a419d93a95" (UID: "0610536d-5467-4887-85ce-c4a419d93a95"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.254950 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0610536d-5467-4887-85ce-c4a419d93a95-logs" (OuterVolumeSpecName: "logs") pod "0610536d-5467-4887-85ce-c4a419d93a95" (UID: "0610536d-5467-4887-85ce-c4a419d93a95"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.259221 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0610536d-5467-4887-85ce-c4a419d93a95-kube-api-access-9l7dv" (OuterVolumeSpecName: "kube-api-access-9l7dv") pod "0610536d-5467-4887-85ce-c4a419d93a95" (UID: "0610536d-5467-4887-85ce-c4a419d93a95"). InnerVolumeSpecName "kube-api-access-9l7dv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.261706 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0610536d-5467-4887-85ce-c4a419d93a95-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0610536d-5467-4887-85ce-c4a419d93a95" (UID: "0610536d-5467-4887-85ce-c4a419d93a95"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.262646 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0610536d-5467-4887-85ce-c4a419d93a95-scripts" (OuterVolumeSpecName: "scripts") pod "0610536d-5467-4887-85ce-c4a419d93a95" (UID: "0610536d-5467-4887-85ce-c4a419d93a95"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.281441 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0610536d-5467-4887-85ce-c4a419d93a95-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0610536d-5467-4887-85ce-c4a419d93a95" (UID: "0610536d-5467-4887-85ce-c4a419d93a95"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.302434 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0610536d-5467-4887-85ce-c4a419d93a95-config-data" (OuterVolumeSpecName: "config-data") pod "0610536d-5467-4887-85ce-c4a419d93a95" (UID: "0610536d-5467-4887-85ce-c4a419d93a95"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.355321 4725 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0610536d-5467-4887-85ce-c4a419d93a95-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.355356 4725 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0610536d-5467-4887-85ce-c4a419d93a95-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.355369 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0610536d-5467-4887-85ce-c4a419d93a95-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.355381 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0610536d-5467-4887-85ce-c4a419d93a95-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.355392 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0610536d-5467-4887-85ce-c4a419d93a95-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.355402 4725 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0610536d-5467-4887-85ce-c4a419d93a95-logs\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.355413 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9l7dv\" (UniqueName: \"kubernetes.io/projected/0610536d-5467-4887-85ce-c4a419d93a95-kube-api-access-9l7dv\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.442753 4725 generic.go:334] "Generic (PLEG): container finished" podID="0610536d-5467-4887-85ce-c4a419d93a95" containerID="dbd14bc393fd84bcad6f07b21eb6f6efe76bf3d6b44f4f8252000d4e1ed0d102" exitCode=0 Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.442787 4725 generic.go:334] "Generic (PLEG): container finished" podID="0610536d-5467-4887-85ce-c4a419d93a95" containerID="5d9a3f5395d7f339d83f7bdadfb545e6ffc432a0ab8f11e9b34764c37e123865" exitCode=143 Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.442828 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0610536d-5467-4887-85ce-c4a419d93a95","Type":"ContainerDied","Data":"dbd14bc393fd84bcad6f07b21eb6f6efe76bf3d6b44f4f8252000d4e1ed0d102"} Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.442857 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0610536d-5467-4887-85ce-c4a419d93a95","Type":"ContainerDied","Data":"5d9a3f5395d7f339d83f7bdadfb545e6ffc432a0ab8f11e9b34764c37e123865"} Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.442880 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0610536d-5467-4887-85ce-c4a419d93a95","Type":"ContainerDied","Data":"378f741f53a63e435faefa4bfa35ab17188971794a5e7cc669bc0f88e1804d25"} Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.442897 4725 scope.go:117] "RemoveContainer" containerID="dbd14bc393fd84bcad6f07b21eb6f6efe76bf3d6b44f4f8252000d4e1ed0d102" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.443036 
4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.451410 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"af8fc3f5-e7e3-4328-85ce-1d6aa110cfde","Type":"ContainerStarted","Data":"c6323b32387e1acd6f4cd5e2ba5525eabb7b455620c71b61fe9db1a5c9a82f6f"} Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.474205 4725 scope.go:117] "RemoveContainer" containerID="5d9a3f5395d7f339d83f7bdadfb545e6ffc432a0ab8f11e9b34764c37e123865" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.490776 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.541238189 podStartE2EDuration="5.490753983s" podCreationTimestamp="2025-10-14 13:33:01 +0000 UTC" firstStartedPulling="2025-10-14 13:33:02.418427102 +0000 UTC m=+1099.266861911" lastFinishedPulling="2025-10-14 13:33:04.367942896 +0000 UTC m=+1101.216377705" observedRunningTime="2025-10-14 13:33:06.469827933 +0000 UTC m=+1103.318262762" watchObservedRunningTime="2025-10-14 13:33:06.490753983 +0000 UTC m=+1103.339188792" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.501683 4725 scope.go:117] "RemoveContainer" containerID="dbd14bc393fd84bcad6f07b21eb6f6efe76bf3d6b44f4f8252000d4e1ed0d102" Oct 14 13:33:06 crc kubenswrapper[4725]: E1014 13:33:06.510389 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbd14bc393fd84bcad6f07b21eb6f6efe76bf3d6b44f4f8252000d4e1ed0d102\": container with ID starting with dbd14bc393fd84bcad6f07b21eb6f6efe76bf3d6b44f4f8252000d4e1ed0d102 not found: ID does not exist" containerID="dbd14bc393fd84bcad6f07b21eb6f6efe76bf3d6b44f4f8252000d4e1ed0d102" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.510477 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbd14bc393fd84bcad6f07b21eb6f6efe76bf3d6b44f4f8252000d4e1ed0d102"} err="failed to get container status \"dbd14bc393fd84bcad6f07b21eb6f6efe76bf3d6b44f4f8252000d4e1ed0d102\": rpc error: code = NotFound desc = could not find container \"dbd14bc393fd84bcad6f07b21eb6f6efe76bf3d6b44f4f8252000d4e1ed0d102\": container with ID starting with dbd14bc393fd84bcad6f07b21eb6f6efe76bf3d6b44f4f8252000d4e1ed0d102 not found: ID does not exist" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.510511 4725 scope.go:117] "RemoveContainer" containerID="5d9a3f5395d7f339d83f7bdadfb545e6ffc432a0ab8f11e9b34764c37e123865" Oct 14 13:33:06 crc kubenswrapper[4725]: E1014 13:33:06.511174 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d9a3f5395d7f339d83f7bdadfb545e6ffc432a0ab8f11e9b34764c37e123865\": container with ID starting with 5d9a3f5395d7f339d83f7bdadfb545e6ffc432a0ab8f11e9b34764c37e123865 not found: ID does not exist" containerID="5d9a3f5395d7f339d83f7bdadfb545e6ffc432a0ab8f11e9b34764c37e123865" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.511208 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d9a3f5395d7f339d83f7bdadfb545e6ffc432a0ab8f11e9b34764c37e123865"} err="failed to get container status \"5d9a3f5395d7f339d83f7bdadfb545e6ffc432a0ab8f11e9b34764c37e123865\": rpc error: code = NotFound desc = could not find container 
\"5d9a3f5395d7f339d83f7bdadfb545e6ffc432a0ab8f11e9b34764c37e123865\": container with ID starting with 5d9a3f5395d7f339d83f7bdadfb545e6ffc432a0ab8f11e9b34764c37e123865 not found: ID does not exist" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.511227 4725 scope.go:117] "RemoveContainer" containerID="dbd14bc393fd84bcad6f07b21eb6f6efe76bf3d6b44f4f8252000d4e1ed0d102" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.511955 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbd14bc393fd84bcad6f07b21eb6f6efe76bf3d6b44f4f8252000d4e1ed0d102"} err="failed to get container status \"dbd14bc393fd84bcad6f07b21eb6f6efe76bf3d6b44f4f8252000d4e1ed0d102\": rpc error: code = NotFound desc = could not find container \"dbd14bc393fd84bcad6f07b21eb6f6efe76bf3d6b44f4f8252000d4e1ed0d102\": container with ID starting with dbd14bc393fd84bcad6f07b21eb6f6efe76bf3d6b44f4f8252000d4e1ed0d102 not found: ID does not exist" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.512001 4725 scope.go:117] "RemoveContainer" containerID="5d9a3f5395d7f339d83f7bdadfb545e6ffc432a0ab8f11e9b34764c37e123865" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.512369 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d9a3f5395d7f339d83f7bdadfb545e6ffc432a0ab8f11e9b34764c37e123865"} err="failed to get container status \"5d9a3f5395d7f339d83f7bdadfb545e6ffc432a0ab8f11e9b34764c37e123865\": rpc error: code = NotFound desc = could not find container \"5d9a3f5395d7f339d83f7bdadfb545e6ffc432a0ab8f11e9b34764c37e123865\": container with ID starting with 5d9a3f5395d7f339d83f7bdadfb545e6ffc432a0ab8f11e9b34764c37e123865 not found: ID does not exist" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.518477 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.539949 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.556424 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Oct 14 13:33:06 crc kubenswrapper[4725]: E1014 13:33:06.556874 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0610536d-5467-4887-85ce-c4a419d93a95" containerName="cinder-api" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.556891 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="0610536d-5467-4887-85ce-c4a419d93a95" containerName="cinder-api" Oct 14 13:33:06 crc kubenswrapper[4725]: E1014 13:33:06.556922 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0610536d-5467-4887-85ce-c4a419d93a95" containerName="cinder-api-log" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.556931 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="0610536d-5467-4887-85ce-c4a419d93a95" containerName="cinder-api-log" Oct 14 13:33:06 crc kubenswrapper[4725]: E1014 13:33:06.556945 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e22c94-f7a6-4caa-8db5-29805e19bdfe" containerName="init" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.556953 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e22c94-f7a6-4caa-8db5-29805e19bdfe" containerName="init" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.557170 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="0610536d-5467-4887-85ce-c4a419d93a95" containerName="cinder-api" Oct 14 13:33:06 
crc kubenswrapper[4725]: I1014 13:33:06.557199 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="0610536d-5467-4887-85ce-c4a419d93a95" containerName="cinder-api-log" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.557212 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="21e22c94-f7a6-4caa-8db5-29805e19bdfe" containerName="init" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.558364 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.561497 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.561685 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.561757 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.563879 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.669434 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2909f1c8-fdcc-4565-a776-576f95ce7fa3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2909f1c8-fdcc-4565-a776-576f95ce7fa3\") " pod="openstack/cinder-api-0" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.669517 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2909f1c8-fdcc-4565-a776-576f95ce7fa3-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2909f1c8-fdcc-4565-a776-576f95ce7fa3\") " pod="openstack/cinder-api-0" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.669591 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2909f1c8-fdcc-4565-a776-576f95ce7fa3-config-data-custom\") pod \"cinder-api-0\" (UID: \"2909f1c8-fdcc-4565-a776-576f95ce7fa3\") " pod="openstack/cinder-api-0" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.669619 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwdpn\" (UniqueName: \"kubernetes.io/projected/2909f1c8-fdcc-4565-a776-576f95ce7fa3-kube-api-access-cwdpn\") pod \"cinder-api-0\" (UID: \"2909f1c8-fdcc-4565-a776-576f95ce7fa3\") " pod="openstack/cinder-api-0" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.669668 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2909f1c8-fdcc-4565-a776-576f95ce7fa3-logs\") pod \"cinder-api-0\" (UID: \"2909f1c8-fdcc-4565-a776-576f95ce7fa3\") " pod="openstack/cinder-api-0" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.669742 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2909f1c8-fdcc-4565-a776-576f95ce7fa3-public-tls-certs\") pod \"cinder-api-0\" (UID: \"2909f1c8-fdcc-4565-a776-576f95ce7fa3\") " pod="openstack/cinder-api-0" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.670079 4725 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2909f1c8-fdcc-4565-a776-576f95ce7fa3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2909f1c8-fdcc-4565-a776-576f95ce7fa3\") " pod="openstack/cinder-api-0" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.670151 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2909f1c8-fdcc-4565-a776-576f95ce7fa3-scripts\") pod \"cinder-api-0\" (UID: \"2909f1c8-fdcc-4565-a776-576f95ce7fa3\") " pod="openstack/cinder-api-0" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.670188 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2909f1c8-fdcc-4565-a776-576f95ce7fa3-config-data\") pod \"cinder-api-0\" (UID: \"2909f1c8-fdcc-4565-a776-576f95ce7fa3\") " pod="openstack/cinder-api-0" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.771305 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2909f1c8-fdcc-4565-a776-576f95ce7fa3-scripts\") pod \"cinder-api-0\" (UID: \"2909f1c8-fdcc-4565-a776-576f95ce7fa3\") " pod="openstack/cinder-api-0" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.771371 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2909f1c8-fdcc-4565-a776-576f95ce7fa3-config-data\") pod \"cinder-api-0\" (UID: \"2909f1c8-fdcc-4565-a776-576f95ce7fa3\") " pod="openstack/cinder-api-0" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.771411 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2909f1c8-fdcc-4565-a776-576f95ce7fa3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2909f1c8-fdcc-4565-a776-576f95ce7fa3\") " pod="openstack/cinder-api-0" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.771436 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2909f1c8-fdcc-4565-a776-576f95ce7fa3-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2909f1c8-fdcc-4565-a776-576f95ce7fa3\") " pod="openstack/cinder-api-0" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.771492 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2909f1c8-fdcc-4565-a776-576f95ce7fa3-config-data-custom\") pod \"cinder-api-0\" (UID: \"2909f1c8-fdcc-4565-a776-576f95ce7fa3\") " pod="openstack/cinder-api-0" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.771520 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwdpn\" (UniqueName: \"kubernetes.io/projected/2909f1c8-fdcc-4565-a776-576f95ce7fa3-kube-api-access-cwdpn\") pod \"cinder-api-0\" (UID: \"2909f1c8-fdcc-4565-a776-576f95ce7fa3\") " pod="openstack/cinder-api-0" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.771551 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2909f1c8-fdcc-4565-a776-576f95ce7fa3-logs\") pod \"cinder-api-0\" (UID: \"2909f1c8-fdcc-4565-a776-576f95ce7fa3\") " pod="openstack/cinder-api-0" Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.771567 
Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.771587 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2909f1c8-fdcc-4565-a776-576f95ce7fa3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2909f1c8-fdcc-4565-a776-576f95ce7fa3\") " pod="openstack/cinder-api-0"
Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.771617 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2909f1c8-fdcc-4565-a776-576f95ce7fa3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2909f1c8-fdcc-4565-a776-576f95ce7fa3\") " pod="openstack/cinder-api-0"
Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.772112 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2909f1c8-fdcc-4565-a776-576f95ce7fa3-logs\") pod \"cinder-api-0\" (UID: \"2909f1c8-fdcc-4565-a776-576f95ce7fa3\") " pod="openstack/cinder-api-0"
Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.776697 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2909f1c8-fdcc-4565-a776-576f95ce7fa3-scripts\") pod \"cinder-api-0\" (UID: \"2909f1c8-fdcc-4565-a776-576f95ce7fa3\") " pod="openstack/cinder-api-0"
Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.777608 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2909f1c8-fdcc-4565-a776-576f95ce7fa3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2909f1c8-fdcc-4565-a776-576f95ce7fa3\") " pod="openstack/cinder-api-0"
Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.778179 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2909f1c8-fdcc-4565-a776-576f95ce7fa3-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2909f1c8-fdcc-4565-a776-576f95ce7fa3\") " pod="openstack/cinder-api-0"
Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.778976 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2909f1c8-fdcc-4565-a776-576f95ce7fa3-config-data-custom\") pod \"cinder-api-0\" (UID: \"2909f1c8-fdcc-4565-a776-576f95ce7fa3\") " pod="openstack/cinder-api-0"
Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.780587 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2909f1c8-fdcc-4565-a776-576f95ce7fa3-config-data\") pod \"cinder-api-0\" (UID: \"2909f1c8-fdcc-4565-a776-576f95ce7fa3\") " pod="openstack/cinder-api-0"
Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.782541 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2909f1c8-fdcc-4565-a776-576f95ce7fa3-public-tls-certs\") pod \"cinder-api-0\" (UID: \"2909f1c8-fdcc-4565-a776-576f95ce7fa3\") " pod="openstack/cinder-api-0"
Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.788682 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwdpn\" (UniqueName: \"kubernetes.io/projected/2909f1c8-fdcc-4565-a776-576f95ce7fa3-kube-api-access-cwdpn\") pod \"cinder-api-0\" (UID: \"2909f1c8-fdcc-4565-a776-576f95ce7fa3\") " pod="openstack/cinder-api-0"
Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.888076 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Oct 14 13:33:06 crc kubenswrapper[4725]: I1014 13:33:06.904473 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.131938 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.280793 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bd1dc15-7e73-480b-9599-123a18602d5e-combined-ca-bundle\") pod \"4bd1dc15-7e73-480b-9599-123a18602d5e\" (UID: \"4bd1dc15-7e73-480b-9599-123a18602d5e\") "
Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.280863 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bd1dc15-7e73-480b-9599-123a18602d5e-run-httpd\") pod \"4bd1dc15-7e73-480b-9599-123a18602d5e\" (UID: \"4bd1dc15-7e73-480b-9599-123a18602d5e\") "
Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.280917 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bd1dc15-7e73-480b-9599-123a18602d5e-config-data\") pod \"4bd1dc15-7e73-480b-9599-123a18602d5e\" (UID: \"4bd1dc15-7e73-480b-9599-123a18602d5e\") "
Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.280943 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bd1dc15-7e73-480b-9599-123a18602d5e-log-httpd\") pod \"4bd1dc15-7e73-480b-9599-123a18602d5e\" (UID: \"4bd1dc15-7e73-480b-9599-123a18602d5e\") "
Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.280987 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bd1dc15-7e73-480b-9599-123a18602d5e-scripts\") pod \"4bd1dc15-7e73-480b-9599-123a18602d5e\" (UID: \"4bd1dc15-7e73-480b-9599-123a18602d5e\") "
Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.281125 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4bd1dc15-7e73-480b-9599-123a18602d5e-sg-core-conf-yaml\") pod \"4bd1dc15-7e73-480b-9599-123a18602d5e\" (UID: \"4bd1dc15-7e73-480b-9599-123a18602d5e\") "
Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.281153 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mq7z4\" (UniqueName: \"kubernetes.io/projected/4bd1dc15-7e73-480b-9599-123a18602d5e-kube-api-access-mq7z4\") pod \"4bd1dc15-7e73-480b-9599-123a18602d5e\" (UID: \"4bd1dc15-7e73-480b-9599-123a18602d5e\") "
Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.281347 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bd1dc15-7e73-480b-9599-123a18602d5e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4bd1dc15-7e73-480b-9599-123a18602d5e" (UID: "4bd1dc15-7e73-480b-9599-123a18602d5e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.281488 4725 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bd1dc15-7e73-480b-9599-123a18602d5e-run-httpd\") on node \"crc\" DevicePath \"\""
Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.281995 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bd1dc15-7e73-480b-9599-123a18602d5e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4bd1dc15-7e73-480b-9599-123a18602d5e" (UID: "4bd1dc15-7e73-480b-9599-123a18602d5e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.287158 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bd1dc15-7e73-480b-9599-123a18602d5e-scripts" (OuterVolumeSpecName: "scripts") pod "4bd1dc15-7e73-480b-9599-123a18602d5e" (UID: "4bd1dc15-7e73-480b-9599-123a18602d5e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.289742 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bd1dc15-7e73-480b-9599-123a18602d5e-kube-api-access-mq7z4" (OuterVolumeSpecName: "kube-api-access-mq7z4") pod "4bd1dc15-7e73-480b-9599-123a18602d5e" (UID: "4bd1dc15-7e73-480b-9599-123a18602d5e"). InnerVolumeSpecName "kube-api-access-mq7z4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.327940 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bd1dc15-7e73-480b-9599-123a18602d5e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4bd1dc15-7e73-480b-9599-123a18602d5e" (UID: "4bd1dc15-7e73-480b-9599-123a18602d5e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.345335 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bd1dc15-7e73-480b-9599-123a18602d5e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4bd1dc15-7e73-480b-9599-123a18602d5e" (UID: "4bd1dc15-7e73-480b-9599-123a18602d5e"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:07 crc kubenswrapper[4725]: W1014 13:33:07.380837 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2909f1c8_fdcc_4565_a776_576f95ce7fa3.slice/crio-b86d8702b96c0a177fd1656b83afa262e4a87b8a0c54e3d29c3296a7f47e2452 WatchSource:0}: Error finding container b86d8702b96c0a177fd1656b83afa262e4a87b8a0c54e3d29c3296a7f47e2452: Status 404 returned error can't find the container with id b86d8702b96c0a177fd1656b83afa262e4a87b8a0c54e3d29c3296a7f47e2452 Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.383443 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bd1dc15-7e73-480b-9599-123a18602d5e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.383596 4725 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bd1dc15-7e73-480b-9599-123a18602d5e-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.383612 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bd1dc15-7e73-480b-9599-123a18602d5e-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.383623 4725 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4bd1dc15-7e73-480b-9599-123a18602d5e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.383636 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mq7z4\" (UniqueName: \"kubernetes.io/projected/4bd1dc15-7e73-480b-9599-123a18602d5e-kube-api-access-mq7z4\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.387884 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.416149 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bd1dc15-7e73-480b-9599-123a18602d5e-config-data" (OuterVolumeSpecName: "config-data") pod "4bd1dc15-7e73-480b-9599-123a18602d5e" (UID: "4bd1dc15-7e73-480b-9599-123a18602d5e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.467244 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2909f1c8-fdcc-4565-a776-576f95ce7fa3","Type":"ContainerStarted","Data":"b86d8702b96c0a177fd1656b83afa262e4a87b8a0c54e3d29c3296a7f47e2452"} Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.471525 4725 generic.go:334] "Generic (PLEG): container finished" podID="4bd1dc15-7e73-480b-9599-123a18602d5e" containerID="15f5abc47c564299170d4a6c8faefc9f3b7a4850aee85003eaf162275aceeec0" exitCode=0 Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.471580 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bd1dc15-7e73-480b-9599-123a18602d5e","Type":"ContainerDied","Data":"15f5abc47c564299170d4a6c8faefc9f3b7a4850aee85003eaf162275aceeec0"} Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.471697 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bd1dc15-7e73-480b-9599-123a18602d5e","Type":"ContainerDied","Data":"fe920be9a76a49ec86c3a822f2e9ae775b689221f73d2030071aebbecd4a8484"} Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.471724 4725 scope.go:117] "RemoveContainer" containerID="f17aa73e95cd39969a6cca6fac5506dbabe8bfa93c68346382996db209218212" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.471598 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.488128 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bd1dc15-7e73-480b-9599-123a18602d5e-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.520710 4725 scope.go:117] "RemoveContainer" containerID="055a65bd493fbbae350c73b4f7f66116107548e7b4f0511a73cfac32e1a9bc58" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.555372 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.562115 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.578251 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 14 13:33:07 crc kubenswrapper[4725]: E1014 13:33:07.582921 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bd1dc15-7e73-480b-9599-123a18602d5e" containerName="ceilometer-notification-agent" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.582950 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bd1dc15-7e73-480b-9599-123a18602d5e" containerName="ceilometer-notification-agent" Oct 14 13:33:07 crc kubenswrapper[4725]: E1014 13:33:07.582975 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bd1dc15-7e73-480b-9599-123a18602d5e" containerName="sg-core" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.582983 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bd1dc15-7e73-480b-9599-123a18602d5e" containerName="sg-core" Oct 14 13:33:07 crc kubenswrapper[4725]: E1014 13:33:07.582992 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bd1dc15-7e73-480b-9599-123a18602d5e" containerName="proxy-httpd" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.582998 4725 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4bd1dc15-7e73-480b-9599-123a18602d5e" containerName="proxy-httpd" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.583233 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bd1dc15-7e73-480b-9599-123a18602d5e" containerName="sg-core" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.583245 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bd1dc15-7e73-480b-9599-123a18602d5e" containerName="proxy-httpd" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.583266 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bd1dc15-7e73-480b-9599-123a18602d5e" containerName="ceilometer-notification-agent" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.584940 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.588023 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.600601 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.608563 4725 scope.go:117] "RemoveContainer" containerID="15f5abc47c564299170d4a6c8faefc9f3b7a4850aee85003eaf162275aceeec0" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.622704 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.656780 4725 scope.go:117] "RemoveContainer" containerID="f17aa73e95cd39969a6cca6fac5506dbabe8bfa93c68346382996db209218212" Oct 14 13:33:07 crc kubenswrapper[4725]: E1014 13:33:07.657556 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f17aa73e95cd39969a6cca6fac5506dbabe8bfa93c68346382996db209218212\": container with ID starting with f17aa73e95cd39969a6cca6fac5506dbabe8bfa93c68346382996db209218212 not found: ID does not exist" containerID="f17aa73e95cd39969a6cca6fac5506dbabe8bfa93c68346382996db209218212" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.657593 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f17aa73e95cd39969a6cca6fac5506dbabe8bfa93c68346382996db209218212"} err="failed to get container status \"f17aa73e95cd39969a6cca6fac5506dbabe8bfa93c68346382996db209218212\": rpc error: code = NotFound desc = could not find container \"f17aa73e95cd39969a6cca6fac5506dbabe8bfa93c68346382996db209218212\": container with ID starting with f17aa73e95cd39969a6cca6fac5506dbabe8bfa93c68346382996db209218212 not found: ID does not exist" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.657616 4725 scope.go:117] "RemoveContainer" containerID="055a65bd493fbbae350c73b4f7f66116107548e7b4f0511a73cfac32e1a9bc58" Oct 14 13:33:07 crc kubenswrapper[4725]: E1014 13:33:07.658403 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"055a65bd493fbbae350c73b4f7f66116107548e7b4f0511a73cfac32e1a9bc58\": container with ID starting with 055a65bd493fbbae350c73b4f7f66116107548e7b4f0511a73cfac32e1a9bc58 not found: ID does not exist" containerID="055a65bd493fbbae350c73b4f7f66116107548e7b4f0511a73cfac32e1a9bc58" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.658466 4725 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"055a65bd493fbbae350c73b4f7f66116107548e7b4f0511a73cfac32e1a9bc58"} err="failed to get container status \"055a65bd493fbbae350c73b4f7f66116107548e7b4f0511a73cfac32e1a9bc58\": rpc error: code = NotFound desc = could not find container \"055a65bd493fbbae350c73b4f7f66116107548e7b4f0511a73cfac32e1a9bc58\": container with ID starting with 055a65bd493fbbae350c73b4f7f66116107548e7b4f0511a73cfac32e1a9bc58 not found: ID does not exist" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.658483 4725 scope.go:117] "RemoveContainer" containerID="15f5abc47c564299170d4a6c8faefc9f3b7a4850aee85003eaf162275aceeec0" Oct 14 13:33:07 crc kubenswrapper[4725]: E1014 13:33:07.658892 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15f5abc47c564299170d4a6c8faefc9f3b7a4850aee85003eaf162275aceeec0\": container with ID starting with 15f5abc47c564299170d4a6c8faefc9f3b7a4850aee85003eaf162275aceeec0 not found: ID does not exist" containerID="15f5abc47c564299170d4a6c8faefc9f3b7a4850aee85003eaf162275aceeec0" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.658926 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15f5abc47c564299170d4a6c8faefc9f3b7a4850aee85003eaf162275aceeec0"} err="failed to get container status \"15f5abc47c564299170d4a6c8faefc9f3b7a4850aee85003eaf162275aceeec0\": rpc error: code = NotFound desc = could not find container \"15f5abc47c564299170d4a6c8faefc9f3b7a4850aee85003eaf162275aceeec0\": container with ID starting with 15f5abc47c564299170d4a6c8faefc9f3b7a4850aee85003eaf162275aceeec0 not found: ID does not exist" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.692292 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49af2f07-59b3-4a96-ba8b-f100b3960827-log-httpd\") pod \"ceilometer-0\" (UID: \"49af2f07-59b3-4a96-ba8b-f100b3960827\") " pod="openstack/ceilometer-0" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.692473 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49af2f07-59b3-4a96-ba8b-f100b3960827-scripts\") pod \"ceilometer-0\" (UID: \"49af2f07-59b3-4a96-ba8b-f100b3960827\") " pod="openstack/ceilometer-0" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.692511 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6kc5\" (UniqueName: \"kubernetes.io/projected/49af2f07-59b3-4a96-ba8b-f100b3960827-kube-api-access-p6kc5\") pod \"ceilometer-0\" (UID: \"49af2f07-59b3-4a96-ba8b-f100b3960827\") " pod="openstack/ceilometer-0" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.692646 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49af2f07-59b3-4a96-ba8b-f100b3960827-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"49af2f07-59b3-4a96-ba8b-f100b3960827\") " pod="openstack/ceilometer-0" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.693511 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49af2f07-59b3-4a96-ba8b-f100b3960827-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"49af2f07-59b3-4a96-ba8b-f100b3960827\") " 
pod="openstack/ceilometer-0" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.693654 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49af2f07-59b3-4a96-ba8b-f100b3960827-run-httpd\") pod \"ceilometer-0\" (UID: \"49af2f07-59b3-4a96-ba8b-f100b3960827\") " pod="openstack/ceilometer-0" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.693691 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49af2f07-59b3-4a96-ba8b-f100b3960827-config-data\") pod \"ceilometer-0\" (UID: \"49af2f07-59b3-4a96-ba8b-f100b3960827\") " pod="openstack/ceilometer-0" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.795725 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49af2f07-59b3-4a96-ba8b-f100b3960827-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"49af2f07-59b3-4a96-ba8b-f100b3960827\") " pod="openstack/ceilometer-0" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.795816 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49af2f07-59b3-4a96-ba8b-f100b3960827-run-httpd\") pod \"ceilometer-0\" (UID: \"49af2f07-59b3-4a96-ba8b-f100b3960827\") " pod="openstack/ceilometer-0" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.795850 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49af2f07-59b3-4a96-ba8b-f100b3960827-config-data\") pod \"ceilometer-0\" (UID: \"49af2f07-59b3-4a96-ba8b-f100b3960827\") " pod="openstack/ceilometer-0" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.795915 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49af2f07-59b3-4a96-ba8b-f100b3960827-log-httpd\") pod \"ceilometer-0\" (UID: \"49af2f07-59b3-4a96-ba8b-f100b3960827\") " pod="openstack/ceilometer-0" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.795950 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49af2f07-59b3-4a96-ba8b-f100b3960827-scripts\") pod \"ceilometer-0\" (UID: \"49af2f07-59b3-4a96-ba8b-f100b3960827\") " pod="openstack/ceilometer-0" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.795972 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6kc5\" (UniqueName: \"kubernetes.io/projected/49af2f07-59b3-4a96-ba8b-f100b3960827-kube-api-access-p6kc5\") pod \"ceilometer-0\" (UID: \"49af2f07-59b3-4a96-ba8b-f100b3960827\") " pod="openstack/ceilometer-0" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.796040 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49af2f07-59b3-4a96-ba8b-f100b3960827-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"49af2f07-59b3-4a96-ba8b-f100b3960827\") " pod="openstack/ceilometer-0" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.796798 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49af2f07-59b3-4a96-ba8b-f100b3960827-run-httpd\") pod \"ceilometer-0\" (UID: \"49af2f07-59b3-4a96-ba8b-f100b3960827\") " 
pod="openstack/ceilometer-0" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.797237 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49af2f07-59b3-4a96-ba8b-f100b3960827-log-httpd\") pod \"ceilometer-0\" (UID: \"49af2f07-59b3-4a96-ba8b-f100b3960827\") " pod="openstack/ceilometer-0" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.800015 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49af2f07-59b3-4a96-ba8b-f100b3960827-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"49af2f07-59b3-4a96-ba8b-f100b3960827\") " pod="openstack/ceilometer-0" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.800421 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49af2f07-59b3-4a96-ba8b-f100b3960827-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"49af2f07-59b3-4a96-ba8b-f100b3960827\") " pod="openstack/ceilometer-0" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.814910 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6kc5\" (UniqueName: \"kubernetes.io/projected/49af2f07-59b3-4a96-ba8b-f100b3960827-kube-api-access-p6kc5\") pod \"ceilometer-0\" (UID: \"49af2f07-59b3-4a96-ba8b-f100b3960827\") " pod="openstack/ceilometer-0" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.832962 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49af2f07-59b3-4a96-ba8b-f100b3960827-scripts\") pod \"ceilometer-0\" (UID: \"49af2f07-59b3-4a96-ba8b-f100b3960827\") " pod="openstack/ceilometer-0" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.857761 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49af2f07-59b3-4a96-ba8b-f100b3960827-config-data\") pod \"ceilometer-0\" (UID: \"49af2f07-59b3-4a96-ba8b-f100b3960827\") " pod="openstack/ceilometer-0" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.889777 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5b8b4d8db-n47v4"] Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.891743 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5b8b4d8db-n47v4" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.895989 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.896023 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.897116 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6497a894-212e-4478-844f-8e401f1de8fa-logs\") pod \"barbican-api-5b8b4d8db-n47v4\" (UID: \"6497a894-212e-4478-844f-8e401f1de8fa\") " pod="openstack/barbican-api-5b8b4d8db-n47v4" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.897146 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc7nc\" (UniqueName: \"kubernetes.io/projected/6497a894-212e-4478-844f-8e401f1de8fa-kube-api-access-jc7nc\") pod \"barbican-api-5b8b4d8db-n47v4\" (UID: \"6497a894-212e-4478-844f-8e401f1de8fa\") " pod="openstack/barbican-api-5b8b4d8db-n47v4" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.897207 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6497a894-212e-4478-844f-8e401f1de8fa-config-data\") pod \"barbican-api-5b8b4d8db-n47v4\" (UID: \"6497a894-212e-4478-844f-8e401f1de8fa\") " pod="openstack/barbican-api-5b8b4d8db-n47v4" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.897283 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6497a894-212e-4478-844f-8e401f1de8fa-config-data-custom\") pod \"barbican-api-5b8b4d8db-n47v4\" (UID: \"6497a894-212e-4478-844f-8e401f1de8fa\") " pod="openstack/barbican-api-5b8b4d8db-n47v4" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.897306 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6497a894-212e-4478-844f-8e401f1de8fa-internal-tls-certs\") pod \"barbican-api-5b8b4d8db-n47v4\" (UID: \"6497a894-212e-4478-844f-8e401f1de8fa\") " pod="openstack/barbican-api-5b8b4d8db-n47v4" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.897351 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6497a894-212e-4478-844f-8e401f1de8fa-public-tls-certs\") pod \"barbican-api-5b8b4d8db-n47v4\" (UID: \"6497a894-212e-4478-844f-8e401f1de8fa\") " pod="openstack/barbican-api-5b8b4d8db-n47v4" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.897372 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6497a894-212e-4478-844f-8e401f1de8fa-combined-ca-bundle\") pod \"barbican-api-5b8b4d8db-n47v4\" (UID: \"6497a894-212e-4478-844f-8e401f1de8fa\") " pod="openstack/barbican-api-5b8b4d8db-n47v4" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.911710 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5b8b4d8db-n47v4"] Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.920234 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.955895 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0610536d-5467-4887-85ce-c4a419d93a95" path="/var/lib/kubelet/pods/0610536d-5467-4887-85ce-c4a419d93a95/volumes" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.956923 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bd1dc15-7e73-480b-9599-123a18602d5e" path="/var/lib/kubelet/pods/4bd1dc15-7e73-480b-9599-123a18602d5e/volumes" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.999282 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6497a894-212e-4478-844f-8e401f1de8fa-config-data\") pod \"barbican-api-5b8b4d8db-n47v4\" (UID: \"6497a894-212e-4478-844f-8e401f1de8fa\") " pod="openstack/barbican-api-5b8b4d8db-n47v4" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.999415 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6497a894-212e-4478-844f-8e401f1de8fa-config-data-custom\") pod \"barbican-api-5b8b4d8db-n47v4\" (UID: \"6497a894-212e-4478-844f-8e401f1de8fa\") " pod="openstack/barbican-api-5b8b4d8db-n47v4" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.999471 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6497a894-212e-4478-844f-8e401f1de8fa-internal-tls-certs\") pod \"barbican-api-5b8b4d8db-n47v4\" (UID: \"6497a894-212e-4478-844f-8e401f1de8fa\") " pod="openstack/barbican-api-5b8b4d8db-n47v4" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.999537 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6497a894-212e-4478-844f-8e401f1de8fa-public-tls-certs\") pod \"barbican-api-5b8b4d8db-n47v4\" (UID: \"6497a894-212e-4478-844f-8e401f1de8fa\") " pod="openstack/barbican-api-5b8b4d8db-n47v4" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.999566 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6497a894-212e-4478-844f-8e401f1de8fa-combined-ca-bundle\") pod \"barbican-api-5b8b4d8db-n47v4\" (UID: \"6497a894-212e-4478-844f-8e401f1de8fa\") " pod="openstack/barbican-api-5b8b4d8db-n47v4" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.999590 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6497a894-212e-4478-844f-8e401f1de8fa-logs\") pod \"barbican-api-5b8b4d8db-n47v4\" (UID: \"6497a894-212e-4478-844f-8e401f1de8fa\") " pod="openstack/barbican-api-5b8b4d8db-n47v4" Oct 14 13:33:07 crc kubenswrapper[4725]: I1014 13:33:07.999654 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc7nc\" (UniqueName: \"kubernetes.io/projected/6497a894-212e-4478-844f-8e401f1de8fa-kube-api-access-jc7nc\") pod \"barbican-api-5b8b4d8db-n47v4\" (UID: \"6497a894-212e-4478-844f-8e401f1de8fa\") " pod="openstack/barbican-api-5b8b4d8db-n47v4" Oct 14 13:33:08 crc kubenswrapper[4725]: I1014 13:33:08.003420 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6497a894-212e-4478-844f-8e401f1de8fa-logs\") pod \"barbican-api-5b8b4d8db-n47v4\" (UID: 
\"6497a894-212e-4478-844f-8e401f1de8fa\") " pod="openstack/barbican-api-5b8b4d8db-n47v4" Oct 14 13:33:08 crc kubenswrapper[4725]: I1014 13:33:08.005662 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6497a894-212e-4478-844f-8e401f1de8fa-internal-tls-certs\") pod \"barbican-api-5b8b4d8db-n47v4\" (UID: \"6497a894-212e-4478-844f-8e401f1de8fa\") " pod="openstack/barbican-api-5b8b4d8db-n47v4" Oct 14 13:33:08 crc kubenswrapper[4725]: I1014 13:33:08.006666 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6497a894-212e-4478-844f-8e401f1de8fa-config-data\") pod \"barbican-api-5b8b4d8db-n47v4\" (UID: \"6497a894-212e-4478-844f-8e401f1de8fa\") " pod="openstack/barbican-api-5b8b4d8db-n47v4" Oct 14 13:33:08 crc kubenswrapper[4725]: I1014 13:33:08.007436 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6497a894-212e-4478-844f-8e401f1de8fa-combined-ca-bundle\") pod \"barbican-api-5b8b4d8db-n47v4\" (UID: \"6497a894-212e-4478-844f-8e401f1de8fa\") " pod="openstack/barbican-api-5b8b4d8db-n47v4" Oct 14 13:33:08 crc kubenswrapper[4725]: I1014 13:33:08.007440 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6497a894-212e-4478-844f-8e401f1de8fa-config-data-custom\") pod \"barbican-api-5b8b4d8db-n47v4\" (UID: \"6497a894-212e-4478-844f-8e401f1de8fa\") " pod="openstack/barbican-api-5b8b4d8db-n47v4" Oct 14 13:33:08 crc kubenswrapper[4725]: I1014 13:33:08.007966 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6497a894-212e-4478-844f-8e401f1de8fa-public-tls-certs\") pod \"barbican-api-5b8b4d8db-n47v4\" (UID: \"6497a894-212e-4478-844f-8e401f1de8fa\") " pod="openstack/barbican-api-5b8b4d8db-n47v4" Oct 14 13:33:08 crc kubenswrapper[4725]: I1014 13:33:08.019574 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc7nc\" (UniqueName: \"kubernetes.io/projected/6497a894-212e-4478-844f-8e401f1de8fa-kube-api-access-jc7nc\") pod \"barbican-api-5b8b4d8db-n47v4\" (UID: \"6497a894-212e-4478-844f-8e401f1de8fa\") " pod="openstack/barbican-api-5b8b4d8db-n47v4" Oct 14 13:33:08 crc kubenswrapper[4725]: I1014 13:33:08.266231 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5b8b4d8db-n47v4" Oct 14 13:33:08 crc kubenswrapper[4725]: I1014 13:33:08.459193 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 13:33:08 crc kubenswrapper[4725]: W1014 13:33:08.502712 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49af2f07_59b3_4a96_ba8b_f100b3960827.slice/crio-b7db0573102e2e7483921b0aee891111358056e031f84cafc90d330a9fa179c2 WatchSource:0}: Error finding container b7db0573102e2e7483921b0aee891111358056e031f84cafc90d330a9fa179c2: Status 404 returned error can't find the container with id b7db0573102e2e7483921b0aee891111358056e031f84cafc90d330a9fa179c2 Oct 14 13:33:08 crc kubenswrapper[4725]: I1014 13:33:08.537046 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2909f1c8-fdcc-4565-a776-576f95ce7fa3","Type":"ContainerStarted","Data":"f89726f54e1dd72b71c9ba2b414b849a3e16c6151c92df53a2e36b91d8f77048"} Oct 14 13:33:08 crc kubenswrapper[4725]: I1014 13:33:08.743112 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5b8b4d8db-n47v4"] Oct 14 13:33:08 crc kubenswrapper[4725]: W1014 13:33:08.757071 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6497a894_212e_4478_844f_8e401f1de8fa.slice/crio-1c33f2f70c556d7ea277d01269c6a762dd604c320aab76c6b683b4fe5fae9154 WatchSource:0}: Error finding container 1c33f2f70c556d7ea277d01269c6a762dd604c320aab76c6b683b4fe5fae9154: Status 404 returned error can't find the container with id 1c33f2f70c556d7ea277d01269c6a762dd604c320aab76c6b683b4fe5fae9154 Oct 14 13:33:09 crc kubenswrapper[4725]: I1014 13:33:09.546558 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b8b4d8db-n47v4" event={"ID":"6497a894-212e-4478-844f-8e401f1de8fa","Type":"ContainerStarted","Data":"f794c0a14e3fd71bd6ef57b6d28f7fbd80980abcea582632f08296315c93dc88"} Oct 14 13:33:09 crc kubenswrapper[4725]: I1014 13:33:09.547125 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b8b4d8db-n47v4" event={"ID":"6497a894-212e-4478-844f-8e401f1de8fa","Type":"ContainerStarted","Data":"1c33f2f70c556d7ea277d01269c6a762dd604c320aab76c6b683b4fe5fae9154"} Oct 14 13:33:09 crc kubenswrapper[4725]: I1014 13:33:09.547594 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49af2f07-59b3-4a96-ba8b-f100b3960827","Type":"ContainerStarted","Data":"b7db0573102e2e7483921b0aee891111358056e031f84cafc90d330a9fa179c2"} Oct 14 13:33:09 crc kubenswrapper[4725]: I1014 13:33:09.550062 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2909f1c8-fdcc-4565-a776-576f95ce7fa3","Type":"ContainerStarted","Data":"8edd1f9f4202ed9566a4b84085f7214fde48a1520682893cb0ad4783788145d1"} Oct 14 13:33:09 crc kubenswrapper[4725]: I1014 13:33:09.551165 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Oct 14 13:33:09 crc kubenswrapper[4725]: I1014 13:33:09.578382 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.57819053 podStartE2EDuration="3.57819053s" podCreationTimestamp="2025-10-14 13:33:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-10-14 13:33:09.570474777 +0000 UTC m=+1106.418909586" watchObservedRunningTime="2025-10-14 13:33:09.57819053 +0000 UTC m=+1106.426625339" Oct 14 13:33:10 crc kubenswrapper[4725]: I1014 13:33:10.566209 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b8b4d8db-n47v4" event={"ID":"6497a894-212e-4478-844f-8e401f1de8fa","Type":"ContainerStarted","Data":"8d52144cbb4fe058a016af78b46aa8773dfeb92268c3bb4d3819c230a6d0894e"} Oct 14 13:33:10 crc kubenswrapper[4725]: I1014 13:33:10.566806 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5b8b4d8db-n47v4" Oct 14 13:33:10 crc kubenswrapper[4725]: I1014 13:33:10.578263 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49af2f07-59b3-4a96-ba8b-f100b3960827","Type":"ContainerStarted","Data":"b1a749b85fa4505ee589e3ea59842a461366e82595a3889dc65b50c43e289bad"} Oct 14 13:33:10 crc kubenswrapper[4725]: I1014 13:33:10.578321 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49af2f07-59b3-4a96-ba8b-f100b3960827","Type":"ContainerStarted","Data":"d63c41cd88f500c01fe387c0ed9f01130d0069e0512e105bc63a28d3aa88bfa4"} Oct 14 13:33:10 crc kubenswrapper[4725]: I1014 13:33:10.596113 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5b8b4d8db-n47v4" podStartSLOduration=3.59609375 podStartE2EDuration="3.59609375s" podCreationTimestamp="2025-10-14 13:33:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:33:10.592141241 +0000 UTC m=+1107.440576050" watchObservedRunningTime="2025-10-14 13:33:10.59609375 +0000 UTC m=+1107.444528559" Oct 14 13:33:11 crc kubenswrapper[4725]: I1014 13:33:11.591054 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49af2f07-59b3-4a96-ba8b-f100b3960827","Type":"ContainerStarted","Data":"752db4880d002ef25970c98ed7b1733548fcc79d7452da56af27fb04dbdd1ea7"} Oct 14 13:33:11 crc kubenswrapper[4725]: I1014 13:33:11.591382 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5b8b4d8db-n47v4" Oct 14 13:33:11 crc kubenswrapper[4725]: I1014 13:33:11.610505 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5cb9f8bc48-bkgjx" Oct 14 13:33:11 crc kubenswrapper[4725]: I1014 13:33:11.786680 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-9d5c84b44-vssnm" Oct 14 13:33:12 crc kubenswrapper[4725]: I1014 13:33:12.051592 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5784cf869f-9tc2g" Oct 14 13:33:12 crc kubenswrapper[4725]: I1014 13:33:12.134187 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-h66x8"] Oct 14 13:33:12 crc kubenswrapper[4725]: I1014 13:33:12.138486 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-84b966f6c9-h66x8" podUID="276146f9-c73f-4f38-b4b2-bba0a70221fe" containerName="dnsmasq-dns" containerID="cri-o://f12bcf64a929fba6d482babdf453939e192c56ab2f17357aceac35899c9a3ab5" gracePeriod=10 Oct 14 13:33:12 crc kubenswrapper[4725]: I1014 13:33:12.409935 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Oct 14 13:33:12 crc 
kubenswrapper[4725]: I1014 13:33:12.457512 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7cdf854644-xbv6p" Oct 14 13:33:12 crc kubenswrapper[4725]: I1014 13:33:12.479306 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 14 13:33:12 crc kubenswrapper[4725]: I1014 13:33:12.622013 4725 generic.go:334] "Generic (PLEG): container finished" podID="276146f9-c73f-4f38-b4b2-bba0a70221fe" containerID="f12bcf64a929fba6d482babdf453939e192c56ab2f17357aceac35899c9a3ab5" exitCode=0 Oct 14 13:33:12 crc kubenswrapper[4725]: I1014 13:33:12.622520 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-h66x8" event={"ID":"276146f9-c73f-4f38-b4b2-bba0a70221fe","Type":"ContainerDied","Data":"f12bcf64a929fba6d482babdf453939e192c56ab2f17357aceac35899c9a3ab5"} Oct 14 13:33:12 crc kubenswrapper[4725]: I1014 13:33:12.622563 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="af8fc3f5-e7e3-4328-85ce-1d6aa110cfde" containerName="cinder-scheduler" containerID="cri-o://79c016f0d4798ff8e63082ab743f0b7872522ecc9d2b4c6587ef211104989327" gracePeriod=30 Oct 14 13:33:12 crc kubenswrapper[4725]: I1014 13:33:12.622655 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="af8fc3f5-e7e3-4328-85ce-1d6aa110cfde" containerName="probe" containerID="cri-o://c6323b32387e1acd6f4cd5e2ba5525eabb7b455620c71b61fe9db1a5c9a82f6f" gracePeriod=30 Oct 14 13:33:12 crc kubenswrapper[4725]: I1014 13:33:12.713320 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-h66x8" Oct 14 13:33:12 crc kubenswrapper[4725]: I1014 13:33:12.894703 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/276146f9-c73f-4f38-b4b2-bba0a70221fe-ovsdbserver-sb\") pod \"276146f9-c73f-4f38-b4b2-bba0a70221fe\" (UID: \"276146f9-c73f-4f38-b4b2-bba0a70221fe\") " Oct 14 13:33:12 crc kubenswrapper[4725]: I1014 13:33:12.894785 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/276146f9-c73f-4f38-b4b2-bba0a70221fe-config\") pod \"276146f9-c73f-4f38-b4b2-bba0a70221fe\" (UID: \"276146f9-c73f-4f38-b4b2-bba0a70221fe\") " Oct 14 13:33:12 crc kubenswrapper[4725]: I1014 13:33:12.894855 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69zss\" (UniqueName: \"kubernetes.io/projected/276146f9-c73f-4f38-b4b2-bba0a70221fe-kube-api-access-69zss\") pod \"276146f9-c73f-4f38-b4b2-bba0a70221fe\" (UID: \"276146f9-c73f-4f38-b4b2-bba0a70221fe\") " Oct 14 13:33:12 crc kubenswrapper[4725]: I1014 13:33:12.894870 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/276146f9-c73f-4f38-b4b2-bba0a70221fe-ovsdbserver-nb\") pod \"276146f9-c73f-4f38-b4b2-bba0a70221fe\" (UID: \"276146f9-c73f-4f38-b4b2-bba0a70221fe\") " Oct 14 13:33:12 crc kubenswrapper[4725]: I1014 13:33:12.894905 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/276146f9-c73f-4f38-b4b2-bba0a70221fe-dns-svc\") pod \"276146f9-c73f-4f38-b4b2-bba0a70221fe\" (UID: \"276146f9-c73f-4f38-b4b2-bba0a70221fe\") " Oct 14 13:33:12 crc 
kubenswrapper[4725]: I1014 13:33:12.894929 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/276146f9-c73f-4f38-b4b2-bba0a70221fe-dns-swift-storage-0\") pod \"276146f9-c73f-4f38-b4b2-bba0a70221fe\" (UID: \"276146f9-c73f-4f38-b4b2-bba0a70221fe\") " Oct 14 13:33:12 crc kubenswrapper[4725]: I1014 13:33:12.909090 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/276146f9-c73f-4f38-b4b2-bba0a70221fe-kube-api-access-69zss" (OuterVolumeSpecName: "kube-api-access-69zss") pod "276146f9-c73f-4f38-b4b2-bba0a70221fe" (UID: "276146f9-c73f-4f38-b4b2-bba0a70221fe"). InnerVolumeSpecName "kube-api-access-69zss". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:33:12 crc kubenswrapper[4725]: I1014 13:33:12.968168 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/276146f9-c73f-4f38-b4b2-bba0a70221fe-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "276146f9-c73f-4f38-b4b2-bba0a70221fe" (UID: "276146f9-c73f-4f38-b4b2-bba0a70221fe"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:33:12 crc kubenswrapper[4725]: I1014 13:33:12.972745 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/276146f9-c73f-4f38-b4b2-bba0a70221fe-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "276146f9-c73f-4f38-b4b2-bba0a70221fe" (UID: "276146f9-c73f-4f38-b4b2-bba0a70221fe"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:33:12 crc kubenswrapper[4725]: I1014 13:33:12.977178 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/276146f9-c73f-4f38-b4b2-bba0a70221fe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "276146f9-c73f-4f38-b4b2-bba0a70221fe" (UID: "276146f9-c73f-4f38-b4b2-bba0a70221fe"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:33:12 crc kubenswrapper[4725]: I1014 13:33:12.977910 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/276146f9-c73f-4f38-b4b2-bba0a70221fe-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "276146f9-c73f-4f38-b4b2-bba0a70221fe" (UID: "276146f9-c73f-4f38-b4b2-bba0a70221fe"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:33:12 crc kubenswrapper[4725]: I1014 13:33:12.980911 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/276146f9-c73f-4f38-b4b2-bba0a70221fe-config" (OuterVolumeSpecName: "config") pod "276146f9-c73f-4f38-b4b2-bba0a70221fe" (UID: "276146f9-c73f-4f38-b4b2-bba0a70221fe"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:33:12 crc kubenswrapper[4725]: I1014 13:33:12.996842 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69zss\" (UniqueName: \"kubernetes.io/projected/276146f9-c73f-4f38-b4b2-bba0a70221fe-kube-api-access-69zss\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:12 crc kubenswrapper[4725]: I1014 13:33:12.996882 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/276146f9-c73f-4f38-b4b2-bba0a70221fe-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:12 crc kubenswrapper[4725]: I1014 13:33:12.996894 4725 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/276146f9-c73f-4f38-b4b2-bba0a70221fe-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:12 crc kubenswrapper[4725]: I1014 13:33:12.996904 4725 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/276146f9-c73f-4f38-b4b2-bba0a70221fe-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:12 crc kubenswrapper[4725]: I1014 13:33:12.996913 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/276146f9-c73f-4f38-b4b2-bba0a70221fe-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:12 crc kubenswrapper[4725]: I1014 13:33:12.996922 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/276146f9-c73f-4f38-b4b2-bba0a70221fe-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:13 crc kubenswrapper[4725]: I1014 13:33:13.276025 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-55fd7554f8-x8glb" Oct 14 13:33:13 crc kubenswrapper[4725]: I1014 13:33:13.292700 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-55fd7554f8-x8glb" Oct 14 13:33:13 crc kubenswrapper[4725]: I1014 13:33:13.647665 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49af2f07-59b3-4a96-ba8b-f100b3960827","Type":"ContainerStarted","Data":"ac1bef92ccd0fb887f55a931fa9763db4401c5dbc00689035ce73b613f593637"} Oct 14 13:33:13 crc kubenswrapper[4725]: I1014 13:33:13.648423 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 14 13:33:13 crc kubenswrapper[4725]: I1014 13:33:13.649982 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-h66x8" event={"ID":"276146f9-c73f-4f38-b4b2-bba0a70221fe","Type":"ContainerDied","Data":"9a428f539459c969831612f64240fca06b9653881fb5ae5255e65f640b40eb4b"} Oct 14 13:33:13 crc kubenswrapper[4725]: I1014 13:33:13.650031 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-h66x8" Oct 14 13:33:13 crc kubenswrapper[4725]: I1014 13:33:13.650032 4725 scope.go:117] "RemoveContainer" containerID="f12bcf64a929fba6d482babdf453939e192c56ab2f17357aceac35899c9a3ab5" Oct 14 13:33:13 crc kubenswrapper[4725]: I1014 13:33:13.653922 4725 generic.go:334] "Generic (PLEG): container finished" podID="af8fc3f5-e7e3-4328-85ce-1d6aa110cfde" containerID="c6323b32387e1acd6f4cd5e2ba5525eabb7b455620c71b61fe9db1a5c9a82f6f" exitCode=0 Oct 14 13:33:13 crc kubenswrapper[4725]: I1014 13:33:13.655198 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"af8fc3f5-e7e3-4328-85ce-1d6aa110cfde","Type":"ContainerDied","Data":"c6323b32387e1acd6f4cd5e2ba5525eabb7b455620c71b61fe9db1a5c9a82f6f"} Oct 14 13:33:13 crc kubenswrapper[4725]: I1014 13:33:13.692156 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.800330283 podStartE2EDuration="6.692130852s" podCreationTimestamp="2025-10-14 13:33:07 +0000 UTC" firstStartedPulling="2025-10-14 13:33:08.51329193 +0000 UTC m=+1105.361726739" lastFinishedPulling="2025-10-14 13:33:12.405092499 +0000 UTC m=+1109.253527308" observedRunningTime="2025-10-14 13:33:13.68517436 +0000 UTC m=+1110.533609179" watchObservedRunningTime="2025-10-14 13:33:13.692130852 +0000 UTC m=+1110.540565671" Oct 14 13:33:13 crc kubenswrapper[4725]: I1014 13:33:13.704836 4725 scope.go:117] "RemoveContainer" containerID="f735e4309f7a3c3f9c12ec2ca58a134eb2651ae0ed8c85f580f485f24c4005d3" Oct 14 13:33:13 crc kubenswrapper[4725]: I1014 13:33:13.707421 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-h66x8"] Oct 14 13:33:13 crc kubenswrapper[4725]: I1014 13:33:13.713483 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-h66x8"] Oct 14 13:33:13 crc kubenswrapper[4725]: I1014 13:33:13.834858 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-846bc7f557-srw79" Oct 14 13:33:13 crc kubenswrapper[4725]: I1014 13:33:13.921166 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5cb9f8bc48-bkgjx"] Oct 14 13:33:13 crc kubenswrapper[4725]: I1014 13:33:13.921380 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5cb9f8bc48-bkgjx" podUID="caca4422-132e-491b-8070-f4e9c4c8ff3a" containerName="neutron-api" containerID="cri-o://0b237005908550424cbfafee9219b4674ac4fe85010dc7948b6fa9cd6ebb7209" gracePeriod=30 Oct 14 13:33:13 crc kubenswrapper[4725]: I1014 13:33:13.921865 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5cb9f8bc48-bkgjx" podUID="caca4422-132e-491b-8070-f4e9c4c8ff3a" containerName="neutron-httpd" containerID="cri-o://7f97d1174c63c7d8688086c1940ffcbb232b5708659c5ff5d87effa0e47016b1" gracePeriod=30 Oct 14 13:33:13 crc kubenswrapper[4725]: I1014 13:33:13.939037 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="276146f9-c73f-4f38-b4b2-bba0a70221fe" path="/var/lib/kubelet/pods/276146f9-c73f-4f38-b4b2-bba0a70221fe/volumes" Oct 14 13:33:13 crc kubenswrapper[4725]: I1014 13:33:13.946720 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-9d5c84b44-vssnm" Oct 14 13:33:14 crc kubenswrapper[4725]: I1014 13:33:14.543818 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/horizon-7cdf854644-xbv6p" Oct 14 13:33:14 crc kubenswrapper[4725]: I1014 13:33:14.616219 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-9d5c84b44-vssnm"] Oct 14 13:33:14 crc kubenswrapper[4725]: I1014 13:33:14.665751 4725 generic.go:334] "Generic (PLEG): container finished" podID="caca4422-132e-491b-8070-f4e9c4c8ff3a" containerID="7f97d1174c63c7d8688086c1940ffcbb232b5708659c5ff5d87effa0e47016b1" exitCode=0 Oct 14 13:33:14 crc kubenswrapper[4725]: I1014 13:33:14.665817 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cb9f8bc48-bkgjx" event={"ID":"caca4422-132e-491b-8070-f4e9c4c8ff3a","Type":"ContainerDied","Data":"7f97d1174c63c7d8688086c1940ffcbb232b5708659c5ff5d87effa0e47016b1"} Oct 14 13:33:14 crc kubenswrapper[4725]: I1014 13:33:14.667126 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-9d5c84b44-vssnm" podUID="551500de-77a9-4f28-ab4b-b8259e04804b" containerName="horizon-log" containerID="cri-o://12820d3bbd64cf010de74b038f5a42c61026b19b4e28f3044936bcaf135f4e48" gracePeriod=30 Oct 14 13:33:14 crc kubenswrapper[4725]: I1014 13:33:14.667396 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-9d5c84b44-vssnm" podUID="551500de-77a9-4f28-ab4b-b8259e04804b" containerName="horizon" containerID="cri-o://2f06fee29194370b1ad8593cc12b848a47dce9aefe7a38963cecbdc075b21a71" gracePeriod=30 Oct 14 13:33:16 crc kubenswrapper[4725]: I1014 13:33:16.300236 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-85547dff5b-c66wz" Oct 14 13:33:16 crc kubenswrapper[4725]: I1014 13:33:16.302445 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-85547dff5b-c66wz" Oct 14 13:33:16 crc kubenswrapper[4725]: I1014 13:33:16.684310 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"af8fc3f5-e7e3-4328-85ce-1d6aa110cfde","Type":"ContainerDied","Data":"79c016f0d4798ff8e63082ab743f0b7872522ecc9d2b4c6587ef211104989327"} Oct 14 13:33:16 crc kubenswrapper[4725]: I1014 13:33:16.684233 4725 generic.go:334] "Generic (PLEG): container finished" podID="af8fc3f5-e7e3-4328-85ce-1d6aa110cfde" containerID="79c016f0d4798ff8e63082ab743f0b7872522ecc9d2b4c6587ef211104989327" exitCode=0 Oct 14 13:33:16 crc kubenswrapper[4725]: I1014 13:33:16.684838 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"af8fc3f5-e7e3-4328-85ce-1d6aa110cfde","Type":"ContainerDied","Data":"7841da58d65448274e49a8f28ca64b32bd92f5cba7e255140359905d5dbae433"} Oct 14 13:33:16 crc kubenswrapper[4725]: I1014 13:33:16.684877 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7841da58d65448274e49a8f28ca64b32bd92f5cba7e255140359905d5dbae433" Oct 14 13:33:16 crc kubenswrapper[4725]: I1014 13:33:16.753534 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 14 13:33:16 crc kubenswrapper[4725]: I1014 13:33:16.940000 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af8fc3f5-e7e3-4328-85ce-1d6aa110cfde-scripts\") pod \"af8fc3f5-e7e3-4328-85ce-1d6aa110cfde\" (UID: \"af8fc3f5-e7e3-4328-85ce-1d6aa110cfde\") " Oct 14 13:33:16 crc kubenswrapper[4725]: I1014 13:33:16.940066 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af8fc3f5-e7e3-4328-85ce-1d6aa110cfde-config-data-custom\") pod \"af8fc3f5-e7e3-4328-85ce-1d6aa110cfde\" (UID: \"af8fc3f5-e7e3-4328-85ce-1d6aa110cfde\") " Oct 14 13:33:16 crc kubenswrapper[4725]: I1014 13:33:16.940121 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjsqx\" (UniqueName: \"kubernetes.io/projected/af8fc3f5-e7e3-4328-85ce-1d6aa110cfde-kube-api-access-vjsqx\") pod \"af8fc3f5-e7e3-4328-85ce-1d6aa110cfde\" (UID: \"af8fc3f5-e7e3-4328-85ce-1d6aa110cfde\") " Oct 14 13:33:16 crc kubenswrapper[4725]: I1014 13:33:16.940178 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af8fc3f5-e7e3-4328-85ce-1d6aa110cfde-config-data\") pod \"af8fc3f5-e7e3-4328-85ce-1d6aa110cfde\" (UID: \"af8fc3f5-e7e3-4328-85ce-1d6aa110cfde\") " Oct 14 13:33:16 crc kubenswrapper[4725]: I1014 13:33:16.940234 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af8fc3f5-e7e3-4328-85ce-1d6aa110cfde-combined-ca-bundle\") pod \"af8fc3f5-e7e3-4328-85ce-1d6aa110cfde\" (UID: \"af8fc3f5-e7e3-4328-85ce-1d6aa110cfde\") " Oct 14 13:33:16 crc kubenswrapper[4725]: I1014 13:33:16.940264 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/af8fc3f5-e7e3-4328-85ce-1d6aa110cfde-etc-machine-id\") pod \"af8fc3f5-e7e3-4328-85ce-1d6aa110cfde\" (UID: \"af8fc3f5-e7e3-4328-85ce-1d6aa110cfde\") " Oct 14 13:33:16 crc kubenswrapper[4725]: I1014 13:33:16.942561 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af8fc3f5-e7e3-4328-85ce-1d6aa110cfde-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "af8fc3f5-e7e3-4328-85ce-1d6aa110cfde" (UID: "af8fc3f5-e7e3-4328-85ce-1d6aa110cfde"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 13:33:16 crc kubenswrapper[4725]: I1014 13:33:16.948213 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af8fc3f5-e7e3-4328-85ce-1d6aa110cfde-scripts" (OuterVolumeSpecName: "scripts") pod "af8fc3f5-e7e3-4328-85ce-1d6aa110cfde" (UID: "af8fc3f5-e7e3-4328-85ce-1d6aa110cfde"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:16 crc kubenswrapper[4725]: I1014 13:33:16.948308 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af8fc3f5-e7e3-4328-85ce-1d6aa110cfde-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "af8fc3f5-e7e3-4328-85ce-1d6aa110cfde" (UID: "af8fc3f5-e7e3-4328-85ce-1d6aa110cfde"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:16 crc kubenswrapper[4725]: I1014 13:33:16.949673 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af8fc3f5-e7e3-4328-85ce-1d6aa110cfde-kube-api-access-vjsqx" (OuterVolumeSpecName: "kube-api-access-vjsqx") pod "af8fc3f5-e7e3-4328-85ce-1d6aa110cfde" (UID: "af8fc3f5-e7e3-4328-85ce-1d6aa110cfde"). InnerVolumeSpecName "kube-api-access-vjsqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:33:17 crc kubenswrapper[4725]: I1014 13:33:17.000534 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af8fc3f5-e7e3-4328-85ce-1d6aa110cfde-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "af8fc3f5-e7e3-4328-85ce-1d6aa110cfde" (UID: "af8fc3f5-e7e3-4328-85ce-1d6aa110cfde"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:17 crc kubenswrapper[4725]: I1014 13:33:17.042040 4725 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/af8fc3f5-e7e3-4328-85ce-1d6aa110cfde-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:17 crc kubenswrapper[4725]: I1014 13:33:17.042072 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af8fc3f5-e7e3-4328-85ce-1d6aa110cfde-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:17 crc kubenswrapper[4725]: I1014 13:33:17.042087 4725 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af8fc3f5-e7e3-4328-85ce-1d6aa110cfde-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:17 crc kubenswrapper[4725]: I1014 13:33:17.042100 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjsqx\" (UniqueName: \"kubernetes.io/projected/af8fc3f5-e7e3-4328-85ce-1d6aa110cfde-kube-api-access-vjsqx\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:17 crc kubenswrapper[4725]: I1014 13:33:17.042113 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af8fc3f5-e7e3-4328-85ce-1d6aa110cfde-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:17 crc kubenswrapper[4725]: I1014 13:33:17.051287 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af8fc3f5-e7e3-4328-85ce-1d6aa110cfde-config-data" (OuterVolumeSpecName: "config-data") pod "af8fc3f5-e7e3-4328-85ce-1d6aa110cfde" (UID: "af8fc3f5-e7e3-4328-85ce-1d6aa110cfde"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:17 crc kubenswrapper[4725]: I1014 13:33:17.143528 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af8fc3f5-e7e3-4328-85ce-1d6aa110cfde-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:17 crc kubenswrapper[4725]: I1014 13:33:17.691687 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 14 13:33:17 crc kubenswrapper[4725]: I1014 13:33:17.728877 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 14 13:33:17 crc kubenswrapper[4725]: I1014 13:33:17.751241 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 14 13:33:17 crc kubenswrapper[4725]: I1014 13:33:17.767379 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Oct 14 13:33:17 crc kubenswrapper[4725]: E1014 13:33:17.767819 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="276146f9-c73f-4f38-b4b2-bba0a70221fe" containerName="init" Oct 14 13:33:17 crc kubenswrapper[4725]: I1014 13:33:17.767838 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="276146f9-c73f-4f38-b4b2-bba0a70221fe" containerName="init" Oct 14 13:33:17 crc kubenswrapper[4725]: E1014 13:33:17.767855 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af8fc3f5-e7e3-4328-85ce-1d6aa110cfde" containerName="probe" Oct 14 13:33:17 crc kubenswrapper[4725]: I1014 13:33:17.767863 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="af8fc3f5-e7e3-4328-85ce-1d6aa110cfde" containerName="probe" Oct 14 13:33:17 crc kubenswrapper[4725]: E1014 13:33:17.767880 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="276146f9-c73f-4f38-b4b2-bba0a70221fe" containerName="dnsmasq-dns" Oct 14 13:33:17 crc kubenswrapper[4725]: I1014 13:33:17.767891 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="276146f9-c73f-4f38-b4b2-bba0a70221fe" containerName="dnsmasq-dns" Oct 14 13:33:17 crc kubenswrapper[4725]: E1014 13:33:17.767907 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af8fc3f5-e7e3-4328-85ce-1d6aa110cfde" containerName="cinder-scheduler" Oct 14 13:33:17 crc kubenswrapper[4725]: I1014 13:33:17.767914 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="af8fc3f5-e7e3-4328-85ce-1d6aa110cfde" containerName="cinder-scheduler" Oct 14 13:33:17 crc kubenswrapper[4725]: I1014 13:33:17.768105 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="af8fc3f5-e7e3-4328-85ce-1d6aa110cfde" containerName="cinder-scheduler" Oct 14 13:33:17 crc kubenswrapper[4725]: I1014 13:33:17.768125 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="af8fc3f5-e7e3-4328-85ce-1d6aa110cfde" containerName="probe" Oct 14 13:33:17 crc kubenswrapper[4725]: I1014 13:33:17.768136 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="276146f9-c73f-4f38-b4b2-bba0a70221fe" containerName="dnsmasq-dns" Oct 14 13:33:17 crc kubenswrapper[4725]: I1014 13:33:17.769319 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 14 13:33:17 crc kubenswrapper[4725]: I1014 13:33:17.772069 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Oct 14 13:33:17 crc kubenswrapper[4725]: I1014 13:33:17.777113 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 14 13:33:17 crc kubenswrapper[4725]: I1014 13:33:17.940192 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af8fc3f5-e7e3-4328-85ce-1d6aa110cfde" path="/var/lib/kubelet/pods/af8fc3f5-e7e3-4328-85ce-1d6aa110cfde/volumes" Oct 14 13:33:17 crc kubenswrapper[4725]: I1014 13:33:17.956920 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e61091e2-13f1-418b-b9ea-0900f8cd786b-scripts\") pod \"cinder-scheduler-0\" (UID: \"e61091e2-13f1-418b-b9ea-0900f8cd786b\") " pod="openstack/cinder-scheduler-0" Oct 14 13:33:17 crc kubenswrapper[4725]: I1014 13:33:17.956973 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e61091e2-13f1-418b-b9ea-0900f8cd786b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e61091e2-13f1-418b-b9ea-0900f8cd786b\") " pod="openstack/cinder-scheduler-0" Oct 14 13:33:17 crc kubenswrapper[4725]: I1014 13:33:17.957116 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e61091e2-13f1-418b-b9ea-0900f8cd786b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e61091e2-13f1-418b-b9ea-0900f8cd786b\") " pod="openstack/cinder-scheduler-0" Oct 14 13:33:17 crc kubenswrapper[4725]: I1014 13:33:17.957240 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e61091e2-13f1-418b-b9ea-0900f8cd786b-config-data\") pod \"cinder-scheduler-0\" (UID: \"e61091e2-13f1-418b-b9ea-0900f8cd786b\") " pod="openstack/cinder-scheduler-0" Oct 14 13:33:17 crc kubenswrapper[4725]: I1014 13:33:17.957337 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e61091e2-13f1-418b-b9ea-0900f8cd786b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e61091e2-13f1-418b-b9ea-0900f8cd786b\") " pod="openstack/cinder-scheduler-0" Oct 14 13:33:17 crc kubenswrapper[4725]: I1014 13:33:17.957557 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrjvr\" (UniqueName: \"kubernetes.io/projected/e61091e2-13f1-418b-b9ea-0900f8cd786b-kube-api-access-zrjvr\") pod \"cinder-scheduler-0\" (UID: \"e61091e2-13f1-418b-b9ea-0900f8cd786b\") " pod="openstack/cinder-scheduler-0" Oct 14 13:33:18 crc kubenswrapper[4725]: I1014 13:33:18.059358 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e61091e2-13f1-418b-b9ea-0900f8cd786b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e61091e2-13f1-418b-b9ea-0900f8cd786b\") " pod="openstack/cinder-scheduler-0" Oct 14 13:33:18 crc kubenswrapper[4725]: I1014 13:33:18.059514 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/e61091e2-13f1-418b-b9ea-0900f8cd786b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e61091e2-13f1-418b-b9ea-0900f8cd786b\") " pod="openstack/cinder-scheduler-0" Oct 14 13:33:18 crc kubenswrapper[4725]: I1014 13:33:18.059525 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrjvr\" (UniqueName: \"kubernetes.io/projected/e61091e2-13f1-418b-b9ea-0900f8cd786b-kube-api-access-zrjvr\") pod \"cinder-scheduler-0\" (UID: \"e61091e2-13f1-418b-b9ea-0900f8cd786b\") " pod="openstack/cinder-scheduler-0" Oct 14 13:33:18 crc kubenswrapper[4725]: I1014 13:33:18.059782 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e61091e2-13f1-418b-b9ea-0900f8cd786b-scripts\") pod \"cinder-scheduler-0\" (UID: \"e61091e2-13f1-418b-b9ea-0900f8cd786b\") " pod="openstack/cinder-scheduler-0" Oct 14 13:33:18 crc kubenswrapper[4725]: I1014 13:33:18.059817 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e61091e2-13f1-418b-b9ea-0900f8cd786b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e61091e2-13f1-418b-b9ea-0900f8cd786b\") " pod="openstack/cinder-scheduler-0" Oct 14 13:33:18 crc kubenswrapper[4725]: I1014 13:33:18.059890 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e61091e2-13f1-418b-b9ea-0900f8cd786b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e61091e2-13f1-418b-b9ea-0900f8cd786b\") " pod="openstack/cinder-scheduler-0" Oct 14 13:33:18 crc kubenswrapper[4725]: I1014 13:33:18.059989 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e61091e2-13f1-418b-b9ea-0900f8cd786b-config-data\") pod \"cinder-scheduler-0\" (UID: \"e61091e2-13f1-418b-b9ea-0900f8cd786b\") " pod="openstack/cinder-scheduler-0" Oct 14 13:33:18 crc kubenswrapper[4725]: I1014 13:33:18.065434 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e61091e2-13f1-418b-b9ea-0900f8cd786b-scripts\") pod \"cinder-scheduler-0\" (UID: \"e61091e2-13f1-418b-b9ea-0900f8cd786b\") " pod="openstack/cinder-scheduler-0" Oct 14 13:33:18 crc kubenswrapper[4725]: I1014 13:33:18.065868 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e61091e2-13f1-418b-b9ea-0900f8cd786b-config-data\") pod \"cinder-scheduler-0\" (UID: \"e61091e2-13f1-418b-b9ea-0900f8cd786b\") " pod="openstack/cinder-scheduler-0" Oct 14 13:33:18 crc kubenswrapper[4725]: I1014 13:33:18.066342 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e61091e2-13f1-418b-b9ea-0900f8cd786b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e61091e2-13f1-418b-b9ea-0900f8cd786b\") " pod="openstack/cinder-scheduler-0" Oct 14 13:33:18 crc kubenswrapper[4725]: I1014 13:33:18.077057 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e61091e2-13f1-418b-b9ea-0900f8cd786b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e61091e2-13f1-418b-b9ea-0900f8cd786b\") " pod="openstack/cinder-scheduler-0" Oct 14 13:33:18 crc kubenswrapper[4725]: I1014 13:33:18.078291 4725 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrjvr\" (UniqueName: \"kubernetes.io/projected/e61091e2-13f1-418b-b9ea-0900f8cd786b-kube-api-access-zrjvr\") pod \"cinder-scheduler-0\" (UID: \"e61091e2-13f1-418b-b9ea-0900f8cd786b\") " pod="openstack/cinder-scheduler-0" Oct 14 13:33:18 crc kubenswrapper[4725]: I1014 13:33:18.097360 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 14 13:33:18 crc kubenswrapper[4725]: I1014 13:33:18.599777 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 14 13:33:18 crc kubenswrapper[4725]: I1014 13:33:18.705577 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e61091e2-13f1-418b-b9ea-0900f8cd786b","Type":"ContainerStarted","Data":"8674f2e003c1c3e999723567e59e42024f67bbaa94ecb9de9933a4fa043afa09"} Oct 14 13:33:18 crc kubenswrapper[4725]: I1014 13:33:18.718014 4725 generic.go:334] "Generic (PLEG): container finished" podID="551500de-77a9-4f28-ab4b-b8259e04804b" containerID="2f06fee29194370b1ad8593cc12b848a47dce9aefe7a38963cecbdc075b21a71" exitCode=0 Oct 14 13:33:18 crc kubenswrapper[4725]: I1014 13:33:18.718074 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d5c84b44-vssnm" event={"ID":"551500de-77a9-4f28-ab4b-b8259e04804b","Type":"ContainerDied","Data":"2f06fee29194370b1ad8593cc12b848a47dce9aefe7a38963cecbdc075b21a71"} Oct 14 13:33:18 crc kubenswrapper[4725]: I1014 13:33:18.864418 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Oct 14 13:33:19 crc kubenswrapper[4725]: I1014 13:33:19.449526 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5cb9f8bc48-bkgjx" Oct 14 13:33:19 crc kubenswrapper[4725]: I1014 13:33:19.586902 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/caca4422-132e-491b-8070-f4e9c4c8ff3a-httpd-config\") pod \"caca4422-132e-491b-8070-f4e9c4c8ff3a\" (UID: \"caca4422-132e-491b-8070-f4e9c4c8ff3a\") " Oct 14 13:33:19 crc kubenswrapper[4725]: I1014 13:33:19.587067 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/caca4422-132e-491b-8070-f4e9c4c8ff3a-config\") pod \"caca4422-132e-491b-8070-f4e9c4c8ff3a\" (UID: \"caca4422-132e-491b-8070-f4e9c4c8ff3a\") " Oct 14 13:33:19 crc kubenswrapper[4725]: I1014 13:33:19.587178 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/caca4422-132e-491b-8070-f4e9c4c8ff3a-ovndb-tls-certs\") pod \"caca4422-132e-491b-8070-f4e9c4c8ff3a\" (UID: \"caca4422-132e-491b-8070-f4e9c4c8ff3a\") " Oct 14 13:33:19 crc kubenswrapper[4725]: I1014 13:33:19.587202 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caca4422-132e-491b-8070-f4e9c4c8ff3a-combined-ca-bundle\") pod \"caca4422-132e-491b-8070-f4e9c4c8ff3a\" (UID: \"caca4422-132e-491b-8070-f4e9c4c8ff3a\") " Oct 14 13:33:19 crc kubenswrapper[4725]: I1014 13:33:19.587240 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxdmt\" (UniqueName: \"kubernetes.io/projected/caca4422-132e-491b-8070-f4e9c4c8ff3a-kube-api-access-rxdmt\") pod \"caca4422-132e-491b-8070-f4e9c4c8ff3a\" (UID: \"caca4422-132e-491b-8070-f4e9c4c8ff3a\") " Oct 14 13:33:19 crc kubenswrapper[4725]: I1014 13:33:19.594645 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caca4422-132e-491b-8070-f4e9c4c8ff3a-kube-api-access-rxdmt" (OuterVolumeSpecName: "kube-api-access-rxdmt") pod "caca4422-132e-491b-8070-f4e9c4c8ff3a" (UID: "caca4422-132e-491b-8070-f4e9c4c8ff3a"). InnerVolumeSpecName "kube-api-access-rxdmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:33:19 crc kubenswrapper[4725]: I1014 13:33:19.607468 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caca4422-132e-491b-8070-f4e9c4c8ff3a-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "caca4422-132e-491b-8070-f4e9c4c8ff3a" (UID: "caca4422-132e-491b-8070-f4e9c4c8ff3a"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:19 crc kubenswrapper[4725]: I1014 13:33:19.647023 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caca4422-132e-491b-8070-f4e9c4c8ff3a-config" (OuterVolumeSpecName: "config") pod "caca4422-132e-491b-8070-f4e9c4c8ff3a" (UID: "caca4422-132e-491b-8070-f4e9c4c8ff3a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:19 crc kubenswrapper[4725]: I1014 13:33:19.678564 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caca4422-132e-491b-8070-f4e9c4c8ff3a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "caca4422-132e-491b-8070-f4e9c4c8ff3a" (UID: "caca4422-132e-491b-8070-f4e9c4c8ff3a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:19 crc kubenswrapper[4725]: I1014 13:33:19.684382 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caca4422-132e-491b-8070-f4e9c4c8ff3a-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "caca4422-132e-491b-8070-f4e9c4c8ff3a" (UID: "caca4422-132e-491b-8070-f4e9c4c8ff3a"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:19 crc kubenswrapper[4725]: I1014 13:33:19.690253 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/caca4422-132e-491b-8070-f4e9c4c8ff3a-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:19 crc kubenswrapper[4725]: I1014 13:33:19.690593 4725 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/caca4422-132e-491b-8070-f4e9c4c8ff3a-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:19 crc kubenswrapper[4725]: I1014 13:33:19.690702 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caca4422-132e-491b-8070-f4e9c4c8ff3a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:19 crc kubenswrapper[4725]: I1014 13:33:19.690787 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxdmt\" (UniqueName: \"kubernetes.io/projected/caca4422-132e-491b-8070-f4e9c4c8ff3a-kube-api-access-rxdmt\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:19 crc kubenswrapper[4725]: I1014 13:33:19.690877 4725 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/caca4422-132e-491b-8070-f4e9c4c8ff3a-httpd-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:19 crc kubenswrapper[4725]: I1014 13:33:19.714220 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-9d5c84b44-vssnm" podUID="551500de-77a9-4f28-ab4b-b8259e04804b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Oct 14 13:33:19 crc kubenswrapper[4725]: I1014 13:33:19.729326 4725 generic.go:334] "Generic (PLEG): container finished" podID="caca4422-132e-491b-8070-f4e9c4c8ff3a" containerID="0b237005908550424cbfafee9219b4674ac4fe85010dc7948b6fa9cd6ebb7209" exitCode=0 Oct 14 13:33:19 crc kubenswrapper[4725]: I1014 13:33:19.729408 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5cb9f8bc48-bkgjx" Oct 14 13:33:19 crc kubenswrapper[4725]: I1014 13:33:19.729420 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cb9f8bc48-bkgjx" event={"ID":"caca4422-132e-491b-8070-f4e9c4c8ff3a","Type":"ContainerDied","Data":"0b237005908550424cbfafee9219b4674ac4fe85010dc7948b6fa9cd6ebb7209"} Oct 14 13:33:19 crc kubenswrapper[4725]: I1014 13:33:19.729493 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cb9f8bc48-bkgjx" event={"ID":"caca4422-132e-491b-8070-f4e9c4c8ff3a","Type":"ContainerDied","Data":"a2d804e6be8a443f7b31e99a08d19393024a3aadccb49aba5c6d98df71b0f58b"} Oct 14 13:33:19 crc kubenswrapper[4725]: I1014 13:33:19.729523 4725 scope.go:117] "RemoveContainer" containerID="7f97d1174c63c7d8688086c1940ffcbb232b5708659c5ff5d87effa0e47016b1" Oct 14 13:33:19 crc kubenswrapper[4725]: I1014 13:33:19.733307 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e61091e2-13f1-418b-b9ea-0900f8cd786b","Type":"ContainerStarted","Data":"d8b6875fbd87e1e88c2f763f325f811eee54f17af3aa1a8e1cc8d68bfd20a5df"} Oct 14 13:33:19 crc kubenswrapper[4725]: I1014 13:33:19.812574 4725 scope.go:117] "RemoveContainer" containerID="0b237005908550424cbfafee9219b4674ac4fe85010dc7948b6fa9cd6ebb7209" Oct 14 13:33:19 crc kubenswrapper[4725]: I1014 13:33:19.817811 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5cb9f8bc48-bkgjx"] Oct 14 13:33:19 crc kubenswrapper[4725]: I1014 13:33:19.826172 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5cb9f8bc48-bkgjx"] Oct 14 13:33:19 crc kubenswrapper[4725]: I1014 13:33:19.845184 4725 scope.go:117] "RemoveContainer" containerID="7f97d1174c63c7d8688086c1940ffcbb232b5708659c5ff5d87effa0e47016b1" Oct 14 13:33:19 crc kubenswrapper[4725]: E1014 13:33:19.845700 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f97d1174c63c7d8688086c1940ffcbb232b5708659c5ff5d87effa0e47016b1\": container with ID starting with 7f97d1174c63c7d8688086c1940ffcbb232b5708659c5ff5d87effa0e47016b1 not found: ID does not exist" containerID="7f97d1174c63c7d8688086c1940ffcbb232b5708659c5ff5d87effa0e47016b1" Oct 14 13:33:19 crc kubenswrapper[4725]: I1014 13:33:19.845737 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f97d1174c63c7d8688086c1940ffcbb232b5708659c5ff5d87effa0e47016b1"} err="failed to get container status \"7f97d1174c63c7d8688086c1940ffcbb232b5708659c5ff5d87effa0e47016b1\": rpc error: code = NotFound desc = could not find container \"7f97d1174c63c7d8688086c1940ffcbb232b5708659c5ff5d87effa0e47016b1\": container with ID starting with 7f97d1174c63c7d8688086c1940ffcbb232b5708659c5ff5d87effa0e47016b1 not found: ID does not exist" Oct 14 13:33:19 crc kubenswrapper[4725]: I1014 13:33:19.845763 4725 scope.go:117] "RemoveContainer" containerID="0b237005908550424cbfafee9219b4674ac4fe85010dc7948b6fa9cd6ebb7209" Oct 14 13:33:19 crc kubenswrapper[4725]: E1014 13:33:19.846194 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b237005908550424cbfafee9219b4674ac4fe85010dc7948b6fa9cd6ebb7209\": container with ID starting with 0b237005908550424cbfafee9219b4674ac4fe85010dc7948b6fa9cd6ebb7209 not found: ID does not exist" containerID="0b237005908550424cbfafee9219b4674ac4fe85010dc7948b6fa9cd6ebb7209"
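The E-level pair just above ("ContainerStatus from runtime service failed" with code = NotFound, then "DeleteContainer returned error") is routine cleanup noise rather than a failure: the neutron pod was already REMOVEd, scope.go re-issued "RemoveContainer" for its two containers, and by the time the kubelet asked CRI-O for their status the runtime had deleted them, which is exactly the intended end state. The same pattern repeats for barbican below, and the E-level "RemoveStaleState: removing container" lines from cpu_manager at 13:33:17 are the equivalent cleanup on the resource-manager side. A rough triage sketch for separating this noise from genuine runtime errors (assumes kubelet journal lines on stdin; the regexes are illustrative, tailored to the message shapes in this log):

    import re, sys

    # A NotFound ContainerStatus error is usually benign when the same container
    # ID already appeared in a "RemoveContainer" entry: the container is gone,
    # which is what the kubelet was trying to achieve in the first place.
    remove_re = re.compile(r'"RemoveContainer" containerID="([0-9a-f]+)"')
    notfound_re = re.compile(r'"ContainerStatus from runtime service failed".*containerID="([0-9a-f]+)"')

    removed = set()
    for line in sys.stdin:  # e.g. piped from: journalctl -b -t kubenswrapper
        if (m := remove_re.search(line)):
            removed.add(m.group(1))
        elif (m := notfound_re.search(line)):
            cid = m.group(1)
            print(f"{cid[:12]}: {'benign, already removed' if cid in removed else 'investigate'}")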
Oct 14 13:33:19 crc kubenswrapper[4725]: I1014 13:33:19.846224 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b237005908550424cbfafee9219b4674ac4fe85010dc7948b6fa9cd6ebb7209"} err="failed to get container status \"0b237005908550424cbfafee9219b4674ac4fe85010dc7948b6fa9cd6ebb7209\": rpc error: code = NotFound desc = could not find container \"0b237005908550424cbfafee9219b4674ac4fe85010dc7948b6fa9cd6ebb7209\": container with ID starting with 0b237005908550424cbfafee9219b4674ac4fe85010dc7948b6fa9cd6ebb7209 not found: ID does not exist" Oct 14 13:33:19 crc kubenswrapper[4725]: I1014 13:33:19.890766 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5b8b4d8db-n47v4" Oct 14 13:33:19 crc kubenswrapper[4725]: I1014 13:33:19.932357 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="caca4422-132e-491b-8070-f4e9c4c8ff3a" path="/var/lib/kubelet/pods/caca4422-132e-491b-8070-f4e9c4c8ff3a/volumes" Oct 14 13:33:19 crc kubenswrapper[4725]: I1014 13:33:19.995496 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5b8b4d8db-n47v4" Oct 14 13:33:20 crc kubenswrapper[4725]: I1014 13:33:20.097104 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-55fd7554f8-x8glb"] Oct 14 13:33:20 crc kubenswrapper[4725]: I1014 13:33:20.097382 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-55fd7554f8-x8glb" podUID="f3bff483-b10b-41fc-bf73-a04a44c16a23" containerName="barbican-api-log" containerID="cri-o://5b2fb61fd73b5e64f2359e05af6265b082d5d264ac480e6a37d56c1a7f4d0161" gracePeriod=30 Oct 14 13:33:20 crc kubenswrapper[4725]: I1014 13:33:20.098736 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-55fd7554f8-x8glb" podUID="f3bff483-b10b-41fc-bf73-a04a44c16a23" containerName="barbican-api" containerID="cri-o://55548febbe0b53a217df6cf21d68aba9a789888eac39f23434d61707961d5945" gracePeriod=30 Oct 14 13:33:20 crc kubenswrapper[4725]: I1014 13:33:20.743075 4725 generic.go:334] "Generic (PLEG): container finished" podID="f3bff483-b10b-41fc-bf73-a04a44c16a23" containerID="5b2fb61fd73b5e64f2359e05af6265b082d5d264ac480e6a37d56c1a7f4d0161" exitCode=143 Oct 14 13:33:20 crc kubenswrapper[4725]: I1014 13:33:20.743147 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55fd7554f8-x8glb" event={"ID":"f3bff483-b10b-41fc-bf73-a04a44c16a23","Type":"ContainerDied","Data":"5b2fb61fd73b5e64f2359e05af6265b082d5d264ac480e6a37d56c1a7f4d0161"} Oct 14 13:33:20 crc kubenswrapper[4725]: I1014 13:33:20.746254 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e61091e2-13f1-418b-b9ea-0900f8cd786b","Type":"ContainerStarted","Data":"05431c869b86f6cf476596c96f4fa7dbca5b59d6ff609c3e8c07b99d6d309071"} Oct 14 13:33:20 crc kubenswrapper[4725]: I1014 13:33:20.765737 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.765719554 podStartE2EDuration="3.765719554s" podCreationTimestamp="2025-10-14 13:33:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:33:20.764784559 +0000 UTC m=+1117.613219368" watchObservedRunningTime="2025-10-14 13:33:20.765719554 +0000 UTC m=+1117.614154363" Oct 14 13:33:22 crc
kubenswrapper[4725]: I1014 13:33:22.193420 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-974b44687-rnvdw" Oct 14 13:33:23 crc kubenswrapper[4725]: I1014 13:33:23.098765 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Oct 14 13:33:23 crc kubenswrapper[4725]: I1014 13:33:23.275042 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-55fd7554f8-x8glb" podUID="f3bff483-b10b-41fc-bf73-a04a44c16a23" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": read tcp 10.217.0.2:38234->10.217.0.161:9311: read: connection reset by peer" Oct 14 13:33:23 crc kubenswrapper[4725]: I1014 13:33:23.276743 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-55fd7554f8-x8glb" podUID="f3bff483-b10b-41fc-bf73-a04a44c16a23" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": read tcp 10.217.0.2:38238->10.217.0.161:9311: read: connection reset by peer" Oct 14 13:33:23 crc kubenswrapper[4725]: I1014 13:33:23.675729 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-55fd7554f8-x8glb" Oct 14 13:33:23 crc kubenswrapper[4725]: I1014 13:33:23.776224 4725 generic.go:334] "Generic (PLEG): container finished" podID="f3bff483-b10b-41fc-bf73-a04a44c16a23" containerID="55548febbe0b53a217df6cf21d68aba9a789888eac39f23434d61707961d5945" exitCode=0 Oct 14 13:33:23 crc kubenswrapper[4725]: I1014 13:33:23.776277 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55fd7554f8-x8glb" event={"ID":"f3bff483-b10b-41fc-bf73-a04a44c16a23","Type":"ContainerDied","Data":"55548febbe0b53a217df6cf21d68aba9a789888eac39f23434d61707961d5945"} Oct 14 13:33:23 crc kubenswrapper[4725]: I1014 13:33:23.776305 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-55fd7554f8-x8glb" event={"ID":"f3bff483-b10b-41fc-bf73-a04a44c16a23","Type":"ContainerDied","Data":"e66d6169e67248f6be5550a0a537e08f2357c7cd3343e60141ecba41fc641f47"} Oct 14 13:33:23 crc kubenswrapper[4725]: I1014 13:33:23.776322 4725 scope.go:117] "RemoveContainer" containerID="55548febbe0b53a217df6cf21d68aba9a789888eac39f23434d61707961d5945" Oct 14 13:33:23 crc kubenswrapper[4725]: I1014 13:33:23.776317 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-55fd7554f8-x8glb"
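The two readiness "Probe failed ... connection reset by peer" records for barbican-api-55fd7554f8-x8glb land about three seconds into the 30-second grace period opened at 13:33:20, so they are expected shutdown noise: the HTTP listener on 10.217.0.161:9311 goes away before the pod object does, while the probe loop keeps firing until PLEG reports the containers finished. A sketch that flags probe failures falling inside a container's grace window (assumes kubelet journal lines on stdin; the year is hard-coded to 2025, taken from the creation timestamps in these records, since klog prefixes carry only month and day):

    import re, sys
    from datetime import datetime

    ts_re = re.compile(r'[IEW](\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d+)')
    kill_re = re.compile(r'"Killing container with a grace period" pod="([^"]+)".*gracePeriod=(\d+)')
    fail_re = re.compile(r'"Probe failed".*pod="([^"]+)"')

    def ts(line):
        # Rebuild a timestamp from the klog prefix, e.g. "I1014 13:33:23.275042".
        m = ts_re.search(line)
        return datetime.strptime(f"2025-{m.group(1)}-{m.group(2)} {m.group(3)}",
                                 "%Y-%m-%d %H:%M:%S.%f")

    kills = {}  # pod name -> (time of last kill, grace period in seconds)
    for line in sys.stdin:
        if (m := kill_re.search(line)):
            kills[m.group(1)] = (ts(line), int(m.group(2)))
        elif (m := fail_re.search(line)) and m.group(1) in kills:
            t0, grace = kills[m.group(1)]
            dt = (ts(line) - t0).total_seconds()
            if 0 <= dt <= grace:
                print(f"{m.group(1)}: failure {dt:.1f}s into {grace}s grace period")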
Oct 14 13:33:23 crc kubenswrapper[4725]: I1014 13:33:23.801563 4725 scope.go:117] "RemoveContainer" containerID="5b2fb61fd73b5e64f2359e05af6265b082d5d264ac480e6a37d56c1a7f4d0161" Oct 14 13:33:23 crc kubenswrapper[4725]: I1014 13:33:23.821411 4725 scope.go:117] "RemoveContainer" containerID="55548febbe0b53a217df6cf21d68aba9a789888eac39f23434d61707961d5945" Oct 14 13:33:23 crc kubenswrapper[4725]: E1014 13:33:23.822252 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55548febbe0b53a217df6cf21d68aba9a789888eac39f23434d61707961d5945\": container with ID starting with 55548febbe0b53a217df6cf21d68aba9a789888eac39f23434d61707961d5945 not found: ID does not exist" containerID="55548febbe0b53a217df6cf21d68aba9a789888eac39f23434d61707961d5945" Oct 14 13:33:23 crc kubenswrapper[4725]: I1014 13:33:23.822293 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55548febbe0b53a217df6cf21d68aba9a789888eac39f23434d61707961d5945"} err="failed to get container status \"55548febbe0b53a217df6cf21d68aba9a789888eac39f23434d61707961d5945\": rpc error: code = NotFound desc = could not find container \"55548febbe0b53a217df6cf21d68aba9a789888eac39f23434d61707961d5945\": container with ID starting with 55548febbe0b53a217df6cf21d68aba9a789888eac39f23434d61707961d5945 not found: ID does not exist" Oct 14 13:33:23 crc kubenswrapper[4725]: I1014 13:33:23.822336 4725 scope.go:117] "RemoveContainer" containerID="5b2fb61fd73b5e64f2359e05af6265b082d5d264ac480e6a37d56c1a7f4d0161" Oct 14 13:33:23 crc kubenswrapper[4725]: E1014 13:33:23.822629 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b2fb61fd73b5e64f2359e05af6265b082d5d264ac480e6a37d56c1a7f4d0161\": container with ID starting with 5b2fb61fd73b5e64f2359e05af6265b082d5d264ac480e6a37d56c1a7f4d0161 not found: ID does not exist" containerID="5b2fb61fd73b5e64f2359e05af6265b082d5d264ac480e6a37d56c1a7f4d0161" Oct 14 13:33:23 crc kubenswrapper[4725]: I1014 13:33:23.822666 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b2fb61fd73b5e64f2359e05af6265b082d5d264ac480e6a37d56c1a7f4d0161"} err="failed to get container status \"5b2fb61fd73b5e64f2359e05af6265b082d5d264ac480e6a37d56c1a7f4d0161\": rpc error: code = NotFound desc = could not find container \"5b2fb61fd73b5e64f2359e05af6265b082d5d264ac480e6a37d56c1a7f4d0161\": container with ID starting with 5b2fb61fd73b5e64f2359e05af6265b082d5d264ac480e6a37d56c1a7f4d0161 not found: ID does not exist" Oct 14 13:33:23 crc kubenswrapper[4725]: I1014 13:33:23.874325 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3bff483-b10b-41fc-bf73-a04a44c16a23-config-data-custom\") pod \"f3bff483-b10b-41fc-bf73-a04a44c16a23\" (UID: \"f3bff483-b10b-41fc-bf73-a04a44c16a23\") " Oct 14 13:33:23 crc kubenswrapper[4725]: I1014 13:33:23.874899 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3bff483-b10b-41fc-bf73-a04a44c16a23-logs\") pod \"f3bff483-b10b-41fc-bf73-a04a44c16a23\" (UID: \"f3bff483-b10b-41fc-bf73-a04a44c16a23\") " Oct 14 13:33:23 crc kubenswrapper[4725]: I1014 13:33:23.875030 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started
for volume \"kube-api-access-zk6wf\" (UniqueName: \"kubernetes.io/projected/f3bff483-b10b-41fc-bf73-a04a44c16a23-kube-api-access-zk6wf\") pod \"f3bff483-b10b-41fc-bf73-a04a44c16a23\" (UID: \"f3bff483-b10b-41fc-bf73-a04a44c16a23\") " Oct 14 13:33:23 crc kubenswrapper[4725]: I1014 13:33:23.875289 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3bff483-b10b-41fc-bf73-a04a44c16a23-combined-ca-bundle\") pod \"f3bff483-b10b-41fc-bf73-a04a44c16a23\" (UID: \"f3bff483-b10b-41fc-bf73-a04a44c16a23\") " Oct 14 13:33:23 crc kubenswrapper[4725]: I1014 13:33:23.875403 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3bff483-b10b-41fc-bf73-a04a44c16a23-config-data\") pod \"f3bff483-b10b-41fc-bf73-a04a44c16a23\" (UID: \"f3bff483-b10b-41fc-bf73-a04a44c16a23\") " Oct 14 13:33:23 crc kubenswrapper[4725]: I1014 13:33:23.875302 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3bff483-b10b-41fc-bf73-a04a44c16a23-logs" (OuterVolumeSpecName: "logs") pod "f3bff483-b10b-41fc-bf73-a04a44c16a23" (UID: "f3bff483-b10b-41fc-bf73-a04a44c16a23"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:33:23 crc kubenswrapper[4725]: I1014 13:33:23.876085 4725 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3bff483-b10b-41fc-bf73-a04a44c16a23-logs\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:23 crc kubenswrapper[4725]: I1014 13:33:23.880072 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3bff483-b10b-41fc-bf73-a04a44c16a23-kube-api-access-zk6wf" (OuterVolumeSpecName: "kube-api-access-zk6wf") pod "f3bff483-b10b-41fc-bf73-a04a44c16a23" (UID: "f3bff483-b10b-41fc-bf73-a04a44c16a23"). InnerVolumeSpecName "kube-api-access-zk6wf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:33:23 crc kubenswrapper[4725]: I1014 13:33:23.907687 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3bff483-b10b-41fc-bf73-a04a44c16a23-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f3bff483-b10b-41fc-bf73-a04a44c16a23" (UID: "f3bff483-b10b-41fc-bf73-a04a44c16a23"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:23 crc kubenswrapper[4725]: I1014 13:33:23.912378 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3bff483-b10b-41fc-bf73-a04a44c16a23-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f3bff483-b10b-41fc-bf73-a04a44c16a23" (UID: "f3bff483-b10b-41fc-bf73-a04a44c16a23"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:23 crc kubenswrapper[4725]: I1014 13:33:23.942315 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3bff483-b10b-41fc-bf73-a04a44c16a23-config-data" (OuterVolumeSpecName: "config-data") pod "f3bff483-b10b-41fc-bf73-a04a44c16a23" (UID: "f3bff483-b10b-41fc-bf73-a04a44c16a23"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:23 crc kubenswrapper[4725]: I1014 13:33:23.978218 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3bff483-b10b-41fc-bf73-a04a44c16a23-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:23 crc kubenswrapper[4725]: I1014 13:33:23.978263 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3bff483-b10b-41fc-bf73-a04a44c16a23-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:23 crc kubenswrapper[4725]: I1014 13:33:23.978274 4725 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3bff483-b10b-41fc-bf73-a04a44c16a23-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:23 crc kubenswrapper[4725]: I1014 13:33:23.978284 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zk6wf\" (UniqueName: \"kubernetes.io/projected/f3bff483-b10b-41fc-bf73-a04a44c16a23-kube-api-access-zk6wf\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:24 crc kubenswrapper[4725]: I1014 13:33:24.103547 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-55fd7554f8-x8glb"] Oct 14 13:33:24 crc kubenswrapper[4725]: I1014 13:33:24.117368 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-55fd7554f8-x8glb"] Oct 14 13:33:24 crc kubenswrapper[4725]: I1014 13:33:24.458190 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Oct 14 13:33:24 crc kubenswrapper[4725]: E1014 13:33:24.458573 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3bff483-b10b-41fc-bf73-a04a44c16a23" containerName="barbican-api-log" Oct 14 13:33:24 crc kubenswrapper[4725]: I1014 13:33:24.458585 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3bff483-b10b-41fc-bf73-a04a44c16a23" containerName="barbican-api-log" Oct 14 13:33:24 crc kubenswrapper[4725]: E1014 13:33:24.458592 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3bff483-b10b-41fc-bf73-a04a44c16a23" containerName="barbican-api" Oct 14 13:33:24 crc kubenswrapper[4725]: I1014 13:33:24.458598 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3bff483-b10b-41fc-bf73-a04a44c16a23" containerName="barbican-api" Oct 14 13:33:24 crc kubenswrapper[4725]: E1014 13:33:24.458615 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caca4422-132e-491b-8070-f4e9c4c8ff3a" containerName="neutron-api" Oct 14 13:33:24 crc kubenswrapper[4725]: I1014 13:33:24.458621 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="caca4422-132e-491b-8070-f4e9c4c8ff3a" containerName="neutron-api" Oct 14 13:33:24 crc kubenswrapper[4725]: E1014 13:33:24.458635 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caca4422-132e-491b-8070-f4e9c4c8ff3a" containerName="neutron-httpd" Oct 14 13:33:24 crc kubenswrapper[4725]: I1014 13:33:24.458640 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="caca4422-132e-491b-8070-f4e9c4c8ff3a" containerName="neutron-httpd" Oct 14 13:33:24 crc kubenswrapper[4725]: I1014 13:33:24.458820 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3bff483-b10b-41fc-bf73-a04a44c16a23" containerName="barbican-api-log" Oct 14 13:33:24 crc kubenswrapper[4725]: I1014 13:33:24.458837 4725 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="caca4422-132e-491b-8070-f4e9c4c8ff3a" containerName="neutron-httpd" Oct 14 13:33:24 crc kubenswrapper[4725]: I1014 13:33:24.458847 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3bff483-b10b-41fc-bf73-a04a44c16a23" containerName="barbican-api" Oct 14 13:33:24 crc kubenswrapper[4725]: I1014 13:33:24.458858 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="caca4422-132e-491b-8070-f4e9c4c8ff3a" containerName="neutron-api" Oct 14 13:33:24 crc kubenswrapper[4725]: I1014 13:33:24.459392 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 14 13:33:24 crc kubenswrapper[4725]: I1014 13:33:24.463686 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Oct 14 13:33:24 crc kubenswrapper[4725]: I1014 13:33:24.463890 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Oct 14 13:33:24 crc kubenswrapper[4725]: I1014 13:33:24.464264 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-72j42" Oct 14 13:33:24 crc kubenswrapper[4725]: I1014 13:33:24.477484 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 14 13:33:24 crc kubenswrapper[4725]: I1014 13:33:24.595419 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/45ebb0aa-8b3e-400e-a9dc-3075c19b873c-openstack-config-secret\") pod \"openstackclient\" (UID: \"45ebb0aa-8b3e-400e-a9dc-3075c19b873c\") " pod="openstack/openstackclient" Oct 14 13:33:24 crc kubenswrapper[4725]: I1014 13:33:24.595605 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/45ebb0aa-8b3e-400e-a9dc-3075c19b873c-openstack-config\") pod \"openstackclient\" (UID: \"45ebb0aa-8b3e-400e-a9dc-3075c19b873c\") " pod="openstack/openstackclient" Oct 14 13:33:24 crc kubenswrapper[4725]: I1014 13:33:24.595715 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45ebb0aa-8b3e-400e-a9dc-3075c19b873c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"45ebb0aa-8b3e-400e-a9dc-3075c19b873c\") " pod="openstack/openstackclient" Oct 14 13:33:24 crc kubenswrapper[4725]: I1014 13:33:24.595755 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b68rr\" (UniqueName: \"kubernetes.io/projected/45ebb0aa-8b3e-400e-a9dc-3075c19b873c-kube-api-access-b68rr\") pod \"openstackclient\" (UID: \"45ebb0aa-8b3e-400e-a9dc-3075c19b873c\") " pod="openstack/openstackclient" Oct 14 13:33:24 crc kubenswrapper[4725]: I1014 13:33:24.697041 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45ebb0aa-8b3e-400e-a9dc-3075c19b873c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"45ebb0aa-8b3e-400e-a9dc-3075c19b873c\") " pod="openstack/openstackclient" Oct 14 13:33:24 crc kubenswrapper[4725]: I1014 13:33:24.697105 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b68rr\" (UniqueName: \"kubernetes.io/projected/45ebb0aa-8b3e-400e-a9dc-3075c19b873c-kube-api-access-b68rr\") pod \"openstackclient\" 
(UID: \"45ebb0aa-8b3e-400e-a9dc-3075c19b873c\") " pod="openstack/openstackclient" Oct 14 13:33:24 crc kubenswrapper[4725]: I1014 13:33:24.697190 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/45ebb0aa-8b3e-400e-a9dc-3075c19b873c-openstack-config-secret\") pod \"openstackclient\" (UID: \"45ebb0aa-8b3e-400e-a9dc-3075c19b873c\") " pod="openstack/openstackclient" Oct 14 13:33:24 crc kubenswrapper[4725]: I1014 13:33:24.697262 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/45ebb0aa-8b3e-400e-a9dc-3075c19b873c-openstack-config\") pod \"openstackclient\" (UID: \"45ebb0aa-8b3e-400e-a9dc-3075c19b873c\") " pod="openstack/openstackclient" Oct 14 13:33:24 crc kubenswrapper[4725]: I1014 13:33:24.698383 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/45ebb0aa-8b3e-400e-a9dc-3075c19b873c-openstack-config\") pod \"openstackclient\" (UID: \"45ebb0aa-8b3e-400e-a9dc-3075c19b873c\") " pod="openstack/openstackclient" Oct 14 13:33:24 crc kubenswrapper[4725]: I1014 13:33:24.703346 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/45ebb0aa-8b3e-400e-a9dc-3075c19b873c-openstack-config-secret\") pod \"openstackclient\" (UID: \"45ebb0aa-8b3e-400e-a9dc-3075c19b873c\") " pod="openstack/openstackclient" Oct 14 13:33:24 crc kubenswrapper[4725]: I1014 13:33:24.705183 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45ebb0aa-8b3e-400e-a9dc-3075c19b873c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"45ebb0aa-8b3e-400e-a9dc-3075c19b873c\") " pod="openstack/openstackclient" Oct 14 13:33:24 crc kubenswrapper[4725]: I1014 13:33:24.720833 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Oct 14 13:33:24 crc kubenswrapper[4725]: E1014 13:33:24.721678 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-b68rr], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/openstackclient" podUID="45ebb0aa-8b3e-400e-a9dc-3075c19b873c" Oct 14 13:33:24 crc kubenswrapper[4725]: I1014 13:33:24.730406 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b68rr\" (UniqueName: \"kubernetes.io/projected/45ebb0aa-8b3e-400e-a9dc-3075c19b873c-kube-api-access-b68rr\") pod \"openstackclient\" (UID: \"45ebb0aa-8b3e-400e-a9dc-3075c19b873c\") " pod="openstack/openstackclient" Oct 14 13:33:24 crc kubenswrapper[4725]: I1014 13:33:24.744410 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Oct 14 13:33:24 crc kubenswrapper[4725]: I1014 13:33:24.769720 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Oct 14 13:33:24 crc kubenswrapper[4725]: I1014 13:33:24.771003 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 14 13:33:24 crc kubenswrapper[4725]: I1014 13:33:24.806587 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
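The records above and below capture a delete-and-recreate race on openstack/openstackclient, visible as a UID switch: the pod object with UID 45ebb0aa-... is DELETEd while its volumes are still mounting, so the E-level "Error syncing pod ... context canceled" is the kubelet cancelling its own in-flight sync rather than a mount failure; after the REMOVE, a replacement with the same name but UID 3e432c2c-... is ADDed, the old UID's volumes are unmounted, and status_manager drops the stale update ("Pod was deleted and then recreated, skipping status update"). A small extraction sketch for spotting such recreations (stdin as before; the regex follows the status_manager message shape below):

    import re, sys

    # Recreated pods keep their name but change UID; status_manager names both.
    pat = re.compile(r'"Pod was deleted and then recreated, skipping status update" '
                     r'pod="([^"]+)" oldPodUID="([^"]+)" podUID="([^"]+)"')

    for line in sys.stdin:
        if (m := pat.search(line)):
            pod, old_uid, new_uid = m.groups()
            print(f"{pod}: {old_uid} -> {new_uid}")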
Oct 14 13:33:24 crc kubenswrapper[4725]: I1014 13:33:24.814864 4725 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="45ebb0aa-8b3e-400e-a9dc-3075c19b873c" podUID="3e432c2c-86bc-4b07-81dd-a98be5ad1ca9" Oct 14 13:33:24 crc kubenswrapper[4725]: I1014 13:33:24.873961 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 14 13:33:24 crc kubenswrapper[4725]: I1014 13:33:24.904275 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3e432c2c-86bc-4b07-81dd-a98be5ad1ca9-openstack-config-secret\") pod \"openstackclient\" (UID: \"3e432c2c-86bc-4b07-81dd-a98be5ad1ca9\") " pod="openstack/openstackclient" Oct 14 13:33:24 crc kubenswrapper[4725]: I1014 13:33:24.904414 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3e432c2c-86bc-4b07-81dd-a98be5ad1ca9-openstack-config\") pod \"openstackclient\" (UID: \"3e432c2c-86bc-4b07-81dd-a98be5ad1ca9\") " pod="openstack/openstackclient" Oct 14 13:33:24 crc kubenswrapper[4725]: I1014 13:33:24.904474 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e432c2c-86bc-4b07-81dd-a98be5ad1ca9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3e432c2c-86bc-4b07-81dd-a98be5ad1ca9\") " pod="openstack/openstackclient" Oct 14 13:33:24 crc kubenswrapper[4725]: I1014 13:33:24.904512 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snsdn\" (UniqueName: \"kubernetes.io/projected/3e432c2c-86bc-4b07-81dd-a98be5ad1ca9-kube-api-access-snsdn\") pod \"openstackclient\" (UID: \"3e432c2c-86bc-4b07-81dd-a98be5ad1ca9\") " pod="openstack/openstackclient" Oct 14 13:33:24 crc kubenswrapper[4725]: I1014 13:33:24.962302 4725 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/openstackclient" Oct 14 13:33:25 crc kubenswrapper[4725]: I1014 13:33:25.006090 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e432c2c-86bc-4b07-81dd-a98be5ad1ca9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3e432c2c-86bc-4b07-81dd-a98be5ad1ca9\") " pod="openstack/openstackclient" Oct 14 13:33:25 crc kubenswrapper[4725]: I1014 13:33:25.006143 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snsdn\" (UniqueName: \"kubernetes.io/projected/3e432c2c-86bc-4b07-81dd-a98be5ad1ca9-kube-api-access-snsdn\") pod \"openstackclient\" (UID: \"3e432c2c-86bc-4b07-81dd-a98be5ad1ca9\") " pod="openstack/openstackclient" Oct 14 13:33:25 crc kubenswrapper[4725]: I1014 13:33:25.006227 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3e432c2c-86bc-4b07-81dd-a98be5ad1ca9-openstack-config-secret\") pod \"openstackclient\" (UID: \"3e432c2c-86bc-4b07-81dd-a98be5ad1ca9\") " pod="openstack/openstackclient" Oct 14 13:33:25 crc kubenswrapper[4725]: I1014 13:33:25.006318 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3e432c2c-86bc-4b07-81dd-a98be5ad1ca9-openstack-config\") pod \"openstackclient\" (UID: \"3e432c2c-86bc-4b07-81dd-a98be5ad1ca9\") " pod="openstack/openstackclient" Oct 14 13:33:25 crc kubenswrapper[4725]: I1014 13:33:25.007043 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3e432c2c-86bc-4b07-81dd-a98be5ad1ca9-openstack-config\") pod \"openstackclient\" (UID: \"3e432c2c-86bc-4b07-81dd-a98be5ad1ca9\") " pod="openstack/openstackclient" Oct 14 13:33:25 crc kubenswrapper[4725]: I1014 13:33:25.011052 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e432c2c-86bc-4b07-81dd-a98be5ad1ca9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3e432c2c-86bc-4b07-81dd-a98be5ad1ca9\") " pod="openstack/openstackclient" Oct 14 13:33:25 crc kubenswrapper[4725]: I1014 13:33:25.011091 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3e432c2c-86bc-4b07-81dd-a98be5ad1ca9-openstack-config-secret\") pod \"openstackclient\" (UID: \"3e432c2c-86bc-4b07-81dd-a98be5ad1ca9\") " pod="openstack/openstackclient" Oct 14 13:33:25 crc kubenswrapper[4725]: I1014 13:33:25.028649 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snsdn\" (UniqueName: \"kubernetes.io/projected/3e432c2c-86bc-4b07-81dd-a98be5ad1ca9-kube-api-access-snsdn\") pod \"openstackclient\" (UID: \"3e432c2c-86bc-4b07-81dd-a98be5ad1ca9\") " pod="openstack/openstackclient" Oct 14 13:33:25 crc kubenswrapper[4725]: I1014 13:33:25.107561 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b68rr\" (UniqueName: \"kubernetes.io/projected/45ebb0aa-8b3e-400e-a9dc-3075c19b873c-kube-api-access-b68rr\") pod \"45ebb0aa-8b3e-400e-a9dc-3075c19b873c\" (UID: \"45ebb0aa-8b3e-400e-a9dc-3075c19b873c\") " Oct 14 13:33:25 crc kubenswrapper[4725]: I1014 13:33:25.107621 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" 
(UniqueName: \"kubernetes.io/secret/45ebb0aa-8b3e-400e-a9dc-3075c19b873c-openstack-config-secret\") pod \"45ebb0aa-8b3e-400e-a9dc-3075c19b873c\" (UID: \"45ebb0aa-8b3e-400e-a9dc-3075c19b873c\") " Oct 14 13:33:25 crc kubenswrapper[4725]: I1014 13:33:25.107670 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45ebb0aa-8b3e-400e-a9dc-3075c19b873c-combined-ca-bundle\") pod \"45ebb0aa-8b3e-400e-a9dc-3075c19b873c\" (UID: \"45ebb0aa-8b3e-400e-a9dc-3075c19b873c\") " Oct 14 13:33:25 crc kubenswrapper[4725]: I1014 13:33:25.107702 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/45ebb0aa-8b3e-400e-a9dc-3075c19b873c-openstack-config\") pod \"45ebb0aa-8b3e-400e-a9dc-3075c19b873c\" (UID: \"45ebb0aa-8b3e-400e-a9dc-3075c19b873c\") " Oct 14 13:33:25 crc kubenswrapper[4725]: I1014 13:33:25.108511 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45ebb0aa-8b3e-400e-a9dc-3075c19b873c-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "45ebb0aa-8b3e-400e-a9dc-3075c19b873c" (UID: "45ebb0aa-8b3e-400e-a9dc-3075c19b873c"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:33:25 crc kubenswrapper[4725]: I1014 13:33:25.110533 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45ebb0aa-8b3e-400e-a9dc-3075c19b873c-kube-api-access-b68rr" (OuterVolumeSpecName: "kube-api-access-b68rr") pod "45ebb0aa-8b3e-400e-a9dc-3075c19b873c" (UID: "45ebb0aa-8b3e-400e-a9dc-3075c19b873c"). InnerVolumeSpecName "kube-api-access-b68rr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:33:25 crc kubenswrapper[4725]: I1014 13:33:25.111184 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45ebb0aa-8b3e-400e-a9dc-3075c19b873c-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "45ebb0aa-8b3e-400e-a9dc-3075c19b873c" (UID: "45ebb0aa-8b3e-400e-a9dc-3075c19b873c"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:25 crc kubenswrapper[4725]: I1014 13:33:25.111656 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45ebb0aa-8b3e-400e-a9dc-3075c19b873c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45ebb0aa-8b3e-400e-a9dc-3075c19b873c" (UID: "45ebb0aa-8b3e-400e-a9dc-3075c19b873c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:25 crc kubenswrapper[4725]: I1014 13:33:25.209846 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45ebb0aa-8b3e-400e-a9dc-3075c19b873c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:25 crc kubenswrapper[4725]: I1014 13:33:25.209885 4725 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/45ebb0aa-8b3e-400e-a9dc-3075c19b873c-openstack-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:25 crc kubenswrapper[4725]: I1014 13:33:25.209895 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b68rr\" (UniqueName: \"kubernetes.io/projected/45ebb0aa-8b3e-400e-a9dc-3075c19b873c-kube-api-access-b68rr\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:25 crc kubenswrapper[4725]: I1014 13:33:25.209906 4725 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/45ebb0aa-8b3e-400e-a9dc-3075c19b873c-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:25 crc kubenswrapper[4725]: I1014 13:33:25.247989 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 14 13:33:25 crc kubenswrapper[4725]: I1014 13:33:25.716243 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 14 13:33:25 crc kubenswrapper[4725]: I1014 13:33:25.815701 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 14 13:33:25 crc kubenswrapper[4725]: I1014 13:33:25.815699 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"3e432c2c-86bc-4b07-81dd-a98be5ad1ca9","Type":"ContainerStarted","Data":"60305596eb0bcf68dd033972df6819e596d3cf920c8bd04865e646d19633f397"} Oct 14 13:33:25 crc kubenswrapper[4725]: I1014 13:33:25.820092 4725 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="45ebb0aa-8b3e-400e-a9dc-3075c19b873c" podUID="3e432c2c-86bc-4b07-81dd-a98be5ad1ca9" Oct 14 13:33:25 crc kubenswrapper[4725]: I1014 13:33:25.933101 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45ebb0aa-8b3e-400e-a9dc-3075c19b873c" path="/var/lib/kubelet/pods/45ebb0aa-8b3e-400e-a9dc-3075c19b873c/volumes" Oct 14 13:33:25 crc kubenswrapper[4725]: I1014 13:33:25.933706 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3bff483-b10b-41fc-bf73-a04a44c16a23" path="/var/lib/kubelet/pods/f3bff483-b10b-41fc-bf73-a04a44c16a23/volumes" Oct 14 13:33:28 crc kubenswrapper[4725]: I1014 13:33:28.513733 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Oct 14 13:33:29 crc kubenswrapper[4725]: I1014 13:33:29.234499 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 14 13:33:29 crc kubenswrapper[4725]: I1014 13:33:29.234771 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="49af2f07-59b3-4a96-ba8b-f100b3960827" containerName="ceilometer-central-agent" containerID="cri-o://d63c41cd88f500c01fe387c0ed9f01130d0069e0512e105bc63a28d3aa88bfa4" gracePeriod=30 Oct 14 13:33:29 crc kubenswrapper[4725]: I1014 13:33:29.234881 4725 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="49af2f07-59b3-4a96-ba8b-f100b3960827" containerName="ceilometer-notification-agent" containerID="cri-o://b1a749b85fa4505ee589e3ea59842a461366e82595a3889dc65b50c43e289bad" gracePeriod=30 Oct 14 13:33:29 crc kubenswrapper[4725]: I1014 13:33:29.234897 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="49af2f07-59b3-4a96-ba8b-f100b3960827" containerName="sg-core" containerID="cri-o://752db4880d002ef25970c98ed7b1733548fcc79d7452da56af27fb04dbdd1ea7" gracePeriod=30 Oct 14 13:33:29 crc kubenswrapper[4725]: I1014 13:33:29.235052 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="49af2f07-59b3-4a96-ba8b-f100b3960827" containerName="proxy-httpd" containerID="cri-o://ac1bef92ccd0fb887f55a931fa9763db4401c5dbc00689035ce73b613f593637" gracePeriod=30 Oct 14 13:33:29 crc kubenswrapper[4725]: I1014 13:33:29.242505 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 14 13:33:29 crc kubenswrapper[4725]: I1014 13:33:29.678250 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-749ff78757-kdtzj"] Oct 14 13:33:29 crc kubenswrapper[4725]: I1014 13:33:29.680480 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-749ff78757-kdtzj" Oct 14 13:33:29 crc kubenswrapper[4725]: I1014 13:33:29.683262 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Oct 14 13:33:29 crc kubenswrapper[4725]: I1014 13:33:29.684162 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Oct 14 13:33:29 crc kubenswrapper[4725]: I1014 13:33:29.684381 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Oct 14 13:33:29 crc kubenswrapper[4725]: I1014 13:33:29.710445 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-749ff78757-kdtzj"] Oct 14 13:33:29 crc kubenswrapper[4725]: I1014 13:33:29.713545 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-9d5c84b44-vssnm" podUID="551500de-77a9-4f28-ab4b-b8259e04804b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Oct 14 13:33:29 crc kubenswrapper[4725]: I1014 13:33:29.805074 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77d143f2-1e54-4c47-a06b-90136098179d-run-httpd\") pod \"swift-proxy-749ff78757-kdtzj\" (UID: \"77d143f2-1e54-4c47-a06b-90136098179d\") " pod="openstack/swift-proxy-749ff78757-kdtzj" Oct 14 13:33:29 crc kubenswrapper[4725]: I1014 13:33:29.805618 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/77d143f2-1e54-4c47-a06b-90136098179d-etc-swift\") pod \"swift-proxy-749ff78757-kdtzj\" (UID: \"77d143f2-1e54-4c47-a06b-90136098179d\") " pod="openstack/swift-proxy-749ff78757-kdtzj" Oct 14 13:33:29 crc kubenswrapper[4725]: I1014 13:33:29.805736 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77d143f2-1e54-4c47-a06b-90136098179d-combined-ca-bundle\") pod 
\"swift-proxy-749ff78757-kdtzj\" (UID: \"77d143f2-1e54-4c47-a06b-90136098179d\") " pod="openstack/swift-proxy-749ff78757-kdtzj" Oct 14 13:33:29 crc kubenswrapper[4725]: I1014 13:33:29.805772 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/77d143f2-1e54-4c47-a06b-90136098179d-internal-tls-certs\") pod \"swift-proxy-749ff78757-kdtzj\" (UID: \"77d143f2-1e54-4c47-a06b-90136098179d\") " pod="openstack/swift-proxy-749ff78757-kdtzj" Oct 14 13:33:29 crc kubenswrapper[4725]: I1014 13:33:29.805941 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/77d143f2-1e54-4c47-a06b-90136098179d-public-tls-certs\") pod \"swift-proxy-749ff78757-kdtzj\" (UID: \"77d143f2-1e54-4c47-a06b-90136098179d\") " pod="openstack/swift-proxy-749ff78757-kdtzj" Oct 14 13:33:29 crc kubenswrapper[4725]: I1014 13:33:29.805976 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77d143f2-1e54-4c47-a06b-90136098179d-config-data\") pod \"swift-proxy-749ff78757-kdtzj\" (UID: \"77d143f2-1e54-4c47-a06b-90136098179d\") " pod="openstack/swift-proxy-749ff78757-kdtzj" Oct 14 13:33:29 crc kubenswrapper[4725]: I1014 13:33:29.806065 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd295\" (UniqueName: \"kubernetes.io/projected/77d143f2-1e54-4c47-a06b-90136098179d-kube-api-access-rd295\") pod \"swift-proxy-749ff78757-kdtzj\" (UID: \"77d143f2-1e54-4c47-a06b-90136098179d\") " pod="openstack/swift-proxy-749ff78757-kdtzj" Oct 14 13:33:29 crc kubenswrapper[4725]: I1014 13:33:29.806188 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77d143f2-1e54-4c47-a06b-90136098179d-log-httpd\") pod \"swift-proxy-749ff78757-kdtzj\" (UID: \"77d143f2-1e54-4c47-a06b-90136098179d\") " pod="openstack/swift-proxy-749ff78757-kdtzj" Oct 14 13:33:29 crc kubenswrapper[4725]: I1014 13:33:29.879961 4725 generic.go:334] "Generic (PLEG): container finished" podID="49af2f07-59b3-4a96-ba8b-f100b3960827" containerID="ac1bef92ccd0fb887f55a931fa9763db4401c5dbc00689035ce73b613f593637" exitCode=0 Oct 14 13:33:29 crc kubenswrapper[4725]: I1014 13:33:29.880001 4725 generic.go:334] "Generic (PLEG): container finished" podID="49af2f07-59b3-4a96-ba8b-f100b3960827" containerID="752db4880d002ef25970c98ed7b1733548fcc79d7452da56af27fb04dbdd1ea7" exitCode=2 Oct 14 13:33:29 crc kubenswrapper[4725]: I1014 13:33:29.880010 4725 generic.go:334] "Generic (PLEG): container finished" podID="49af2f07-59b3-4a96-ba8b-f100b3960827" containerID="d63c41cd88f500c01fe387c0ed9f01130d0069e0512e105bc63a28d3aa88bfa4" exitCode=0 Oct 14 13:33:29 crc kubenswrapper[4725]: I1014 13:33:29.880035 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49af2f07-59b3-4a96-ba8b-f100b3960827","Type":"ContainerDied","Data":"ac1bef92ccd0fb887f55a931fa9763db4401c5dbc00689035ce73b613f593637"} Oct 14 13:33:29 crc kubenswrapper[4725]: I1014 13:33:29.880059 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49af2f07-59b3-4a96-ba8b-f100b3960827","Type":"ContainerDied","Data":"752db4880d002ef25970c98ed7b1733548fcc79d7452da56af27fb04dbdd1ea7"} Oct 14 13:33:29 crc 
kubenswrapper[4725]: I1014 13:33:29.880068 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49af2f07-59b3-4a96-ba8b-f100b3960827","Type":"ContainerDied","Data":"d63c41cd88f500c01fe387c0ed9f01130d0069e0512e105bc63a28d3aa88bfa4"} Oct 14 13:33:29 crc kubenswrapper[4725]: I1014 13:33:29.907273 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77d143f2-1e54-4c47-a06b-90136098179d-log-httpd\") pod \"swift-proxy-749ff78757-kdtzj\" (UID: \"77d143f2-1e54-4c47-a06b-90136098179d\") " pod="openstack/swift-proxy-749ff78757-kdtzj" Oct 14 13:33:29 crc kubenswrapper[4725]: I1014 13:33:29.907355 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77d143f2-1e54-4c47-a06b-90136098179d-run-httpd\") pod \"swift-proxy-749ff78757-kdtzj\" (UID: \"77d143f2-1e54-4c47-a06b-90136098179d\") " pod="openstack/swift-proxy-749ff78757-kdtzj" Oct 14 13:33:29 crc kubenswrapper[4725]: I1014 13:33:29.907398 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/77d143f2-1e54-4c47-a06b-90136098179d-etc-swift\") pod \"swift-proxy-749ff78757-kdtzj\" (UID: \"77d143f2-1e54-4c47-a06b-90136098179d\") " pod="openstack/swift-proxy-749ff78757-kdtzj" Oct 14 13:33:29 crc kubenswrapper[4725]: I1014 13:33:29.907439 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77d143f2-1e54-4c47-a06b-90136098179d-combined-ca-bundle\") pod \"swift-proxy-749ff78757-kdtzj\" (UID: \"77d143f2-1e54-4c47-a06b-90136098179d\") " pod="openstack/swift-proxy-749ff78757-kdtzj" Oct 14 13:33:29 crc kubenswrapper[4725]: I1014 13:33:29.907570 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/77d143f2-1e54-4c47-a06b-90136098179d-internal-tls-certs\") pod \"swift-proxy-749ff78757-kdtzj\" (UID: \"77d143f2-1e54-4c47-a06b-90136098179d\") " pod="openstack/swift-proxy-749ff78757-kdtzj" Oct 14 13:33:29 crc kubenswrapper[4725]: I1014 13:33:29.907641 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/77d143f2-1e54-4c47-a06b-90136098179d-public-tls-certs\") pod \"swift-proxy-749ff78757-kdtzj\" (UID: \"77d143f2-1e54-4c47-a06b-90136098179d\") " pod="openstack/swift-proxy-749ff78757-kdtzj" Oct 14 13:33:29 crc kubenswrapper[4725]: I1014 13:33:29.907667 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77d143f2-1e54-4c47-a06b-90136098179d-config-data\") pod \"swift-proxy-749ff78757-kdtzj\" (UID: \"77d143f2-1e54-4c47-a06b-90136098179d\") " pod="openstack/swift-proxy-749ff78757-kdtzj" Oct 14 13:33:29 crc kubenswrapper[4725]: I1014 13:33:29.907699 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd295\" (UniqueName: \"kubernetes.io/projected/77d143f2-1e54-4c47-a06b-90136098179d-kube-api-access-rd295\") pod \"swift-proxy-749ff78757-kdtzj\" (UID: \"77d143f2-1e54-4c47-a06b-90136098179d\") " pod="openstack/swift-proxy-749ff78757-kdtzj" Oct 14 13:33:29 crc kubenswrapper[4725]: I1014 13:33:29.908832 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/77d143f2-1e54-4c47-a06b-90136098179d-log-httpd\") pod \"swift-proxy-749ff78757-kdtzj\" (UID: \"77d143f2-1e54-4c47-a06b-90136098179d\") " pod="openstack/swift-proxy-749ff78757-kdtzj" Oct 14 13:33:29 crc kubenswrapper[4725]: I1014 13:33:29.908957 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77d143f2-1e54-4c47-a06b-90136098179d-run-httpd\") pod \"swift-proxy-749ff78757-kdtzj\" (UID: \"77d143f2-1e54-4c47-a06b-90136098179d\") " pod="openstack/swift-proxy-749ff78757-kdtzj" Oct 14 13:33:29 crc kubenswrapper[4725]: I1014 13:33:29.916790 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/77d143f2-1e54-4c47-a06b-90136098179d-public-tls-certs\") pod \"swift-proxy-749ff78757-kdtzj\" (UID: \"77d143f2-1e54-4c47-a06b-90136098179d\") " pod="openstack/swift-proxy-749ff78757-kdtzj" Oct 14 13:33:29 crc kubenswrapper[4725]: I1014 13:33:29.917108 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77d143f2-1e54-4c47-a06b-90136098179d-combined-ca-bundle\") pod \"swift-proxy-749ff78757-kdtzj\" (UID: \"77d143f2-1e54-4c47-a06b-90136098179d\") " pod="openstack/swift-proxy-749ff78757-kdtzj" Oct 14 13:33:29 crc kubenswrapper[4725]: I1014 13:33:29.917662 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77d143f2-1e54-4c47-a06b-90136098179d-config-data\") pod \"swift-proxy-749ff78757-kdtzj\" (UID: \"77d143f2-1e54-4c47-a06b-90136098179d\") " pod="openstack/swift-proxy-749ff78757-kdtzj" Oct 14 13:33:29 crc kubenswrapper[4725]: I1014 13:33:29.923050 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/77d143f2-1e54-4c47-a06b-90136098179d-internal-tls-certs\") pod \"swift-proxy-749ff78757-kdtzj\" (UID: \"77d143f2-1e54-4c47-a06b-90136098179d\") " pod="openstack/swift-proxy-749ff78757-kdtzj" Oct 14 13:33:29 crc kubenswrapper[4725]: I1014 13:33:29.923723 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/77d143f2-1e54-4c47-a06b-90136098179d-etc-swift\") pod \"swift-proxy-749ff78757-kdtzj\" (UID: \"77d143f2-1e54-4c47-a06b-90136098179d\") " pod="openstack/swift-proxy-749ff78757-kdtzj" Oct 14 13:33:29 crc kubenswrapper[4725]: I1014 13:33:29.938350 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd295\" (UniqueName: \"kubernetes.io/projected/77d143f2-1e54-4c47-a06b-90136098179d-kube-api-access-rd295\") pod \"swift-proxy-749ff78757-kdtzj\" (UID: \"77d143f2-1e54-4c47-a06b-90136098179d\") " pod="openstack/swift-proxy-749ff78757-kdtzj" Oct 14 13:33:30 crc kubenswrapper[4725]: I1014 13:33:30.061027 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-749ff78757-kdtzj" Oct 14 13:33:31 crc kubenswrapper[4725]: I1014 13:33:31.583912 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 13:33:31 crc kubenswrapper[4725]: I1014 13:33:31.584471 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5d89387e-949e-49c8-b6a1-543aaa1a02d5" containerName="glance-log" containerID="cri-o://246420b5ad57920b0bb890f04077bcafa9a3e0e0138ffaedc093baaedfa039cf" gracePeriod=30 Oct 14 13:33:31 crc kubenswrapper[4725]: I1014 13:33:31.584915 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5d89387e-949e-49c8-b6a1-543aaa1a02d5" containerName="glance-httpd" containerID="cri-o://90a394f171260f06a0c197fccb5ae0a94b79b2b8a61fa80072a9328aee753ccb" gracePeriod=30 Oct 14 13:33:31 crc kubenswrapper[4725]: I1014 13:33:31.913693 4725 generic.go:334] "Generic (PLEG): container finished" podID="5d89387e-949e-49c8-b6a1-543aaa1a02d5" containerID="246420b5ad57920b0bb890f04077bcafa9a3e0e0138ffaedc093baaedfa039cf" exitCode=143 Oct 14 13:33:31 crc kubenswrapper[4725]: I1014 13:33:31.913732 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5d89387e-949e-49c8-b6a1-543aaa1a02d5","Type":"ContainerDied","Data":"246420b5ad57920b0bb890f04077bcafa9a3e0e0138ffaedc093baaedfa039cf"} Oct 14 13:33:32 crc kubenswrapper[4725]: I1014 13:33:32.489429 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 13:33:32 crc kubenswrapper[4725]: I1014 13:33:32.489691 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7756dc76-bc61-4bab-a662-649e33ffc929" containerName="glance-log" containerID="cri-o://692eff8fb6d388e0baa3e13d498259c71a7e5574109db0a7eb814434936bbaa6" gracePeriod=30 Oct 14 13:33:32 crc kubenswrapper[4725]: I1014 13:33:32.489818 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7756dc76-bc61-4bab-a662-649e33ffc929" containerName="glance-httpd" containerID="cri-o://41a729d606fa02f4367514aee139728508c4f67c66425f2a882f5938d0001735" gracePeriod=30 Oct 14 13:33:32 crc kubenswrapper[4725]: I1014 13:33:32.520208 4725 patch_prober.go:28] interesting pod/machine-config-daemon-t9hh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 13:33:32 crc kubenswrapper[4725]: I1014 13:33:32.520282 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 13:33:32 crc kubenswrapper[4725]: I1014 13:33:32.925749 4725 generic.go:334] "Generic (PLEG): container finished" podID="7756dc76-bc61-4bab-a662-649e33ffc929" containerID="692eff8fb6d388e0baa3e13d498259c71a7e5574109db0a7eb814434936bbaa6" exitCode=143 Oct 14 13:33:32 crc kubenswrapper[4725]: I1014 13:33:32.925842 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"7756dc76-bc61-4bab-a662-649e33ffc929","Type":"ContainerDied","Data":"692eff8fb6d388e0baa3e13d498259c71a7e5574109db0a7eb814434936bbaa6"} Oct 14 13:33:32 crc kubenswrapper[4725]: I1014 13:33:32.931972 4725 generic.go:334] "Generic (PLEG): container finished" podID="49af2f07-59b3-4a96-ba8b-f100b3960827" containerID="b1a749b85fa4505ee589e3ea59842a461366e82595a3889dc65b50c43e289bad" exitCode=0 Oct 14 13:33:32 crc kubenswrapper[4725]: I1014 13:33:32.932012 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49af2f07-59b3-4a96-ba8b-f100b3960827","Type":"ContainerDied","Data":"b1a749b85fa4505ee589e3ea59842a461366e82595a3889dc65b50c43e289bad"} Oct 14 13:33:34 crc kubenswrapper[4725]: I1014 13:33:34.603338 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-42h4h"] Oct 14 13:33:34 crc kubenswrapper[4725]: I1014 13:33:34.605996 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-42h4h" Oct 14 13:33:34 crc kubenswrapper[4725]: I1014 13:33:34.618852 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-42h4h"] Oct 14 13:33:34 crc kubenswrapper[4725]: I1014 13:33:34.691171 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-lp5lp"] Oct 14 13:33:34 crc kubenswrapper[4725]: I1014 13:33:34.692268 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-lp5lp" Oct 14 13:33:34 crc kubenswrapper[4725]: I1014 13:33:34.704649 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl4jk\" (UniqueName: \"kubernetes.io/projected/6bbd0716-da3f-4287-9ede-533325e1b42e-kube-api-access-rl4jk\") pod \"nova-api-db-create-42h4h\" (UID: \"6bbd0716-da3f-4287-9ede-533325e1b42e\") " pod="openstack/nova-api-db-create-42h4h" Oct 14 13:33:34 crc kubenswrapper[4725]: I1014 13:33:34.714986 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-lp5lp"] Oct 14 13:33:34 crc kubenswrapper[4725]: I1014 13:33:34.806547 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4z7n\" (UniqueName: \"kubernetes.io/projected/97761d43-e004-4dd0-9648-9ef68fbf7d18-kube-api-access-d4z7n\") pod \"nova-cell0-db-create-lp5lp\" (UID: \"97761d43-e004-4dd0-9648-9ef68fbf7d18\") " pod="openstack/nova-cell0-db-create-lp5lp" Oct 14 13:33:34 crc kubenswrapper[4725]: I1014 13:33:34.806765 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl4jk\" (UniqueName: \"kubernetes.io/projected/6bbd0716-da3f-4287-9ede-533325e1b42e-kube-api-access-rl4jk\") pod \"nova-api-db-create-42h4h\" (UID: \"6bbd0716-da3f-4287-9ede-533325e1b42e\") " pod="openstack/nova-api-db-create-42h4h" Oct 14 13:33:34 crc kubenswrapper[4725]: I1014 13:33:34.813803 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-4vdpc"] Oct 14 13:33:34 crc kubenswrapper[4725]: I1014 13:33:34.814989 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-4vdpc" Oct 14 13:33:34 crc kubenswrapper[4725]: I1014 13:33:34.821381 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-4vdpc"] Oct 14 13:33:34 crc kubenswrapper[4725]: I1014 13:33:34.839031 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl4jk\" (UniqueName: \"kubernetes.io/projected/6bbd0716-da3f-4287-9ede-533325e1b42e-kube-api-access-rl4jk\") pod \"nova-api-db-create-42h4h\" (UID: \"6bbd0716-da3f-4287-9ede-533325e1b42e\") " pod="openstack/nova-api-db-create-42h4h" Oct 14 13:33:34 crc kubenswrapper[4725]: I1014 13:33:34.908276 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4z7n\" (UniqueName: \"kubernetes.io/projected/97761d43-e004-4dd0-9648-9ef68fbf7d18-kube-api-access-d4z7n\") pod \"nova-cell0-db-create-lp5lp\" (UID: \"97761d43-e004-4dd0-9648-9ef68fbf7d18\") " pod="openstack/nova-cell0-db-create-lp5lp" Oct 14 13:33:34 crc kubenswrapper[4725]: I1014 13:33:34.923191 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4z7n\" (UniqueName: \"kubernetes.io/projected/97761d43-e004-4dd0-9648-9ef68fbf7d18-kube-api-access-d4z7n\") pod \"nova-cell0-db-create-lp5lp\" (UID: \"97761d43-e004-4dd0-9648-9ef68fbf7d18\") " pod="openstack/nova-cell0-db-create-lp5lp" Oct 14 13:33:34 crc kubenswrapper[4725]: I1014 13:33:34.925968 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-42h4h" Oct 14 13:33:34 crc kubenswrapper[4725]: I1014 13:33:34.956730 4725 generic.go:334] "Generic (PLEG): container finished" podID="5d89387e-949e-49c8-b6a1-543aaa1a02d5" containerID="90a394f171260f06a0c197fccb5ae0a94b79b2b8a61fa80072a9328aee753ccb" exitCode=0 Oct 14 13:33:34 crc kubenswrapper[4725]: I1014 13:33:34.956778 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5d89387e-949e-49c8-b6a1-543aaa1a02d5","Type":"ContainerDied","Data":"90a394f171260f06a0c197fccb5ae0a94b79b2b8a61fa80072a9328aee753ccb"} Oct 14 13:33:35 crc kubenswrapper[4725]: I1014 13:33:35.009974 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96ng6\" (UniqueName: \"kubernetes.io/projected/bacf7fe9-2e8f-4909-9761-e73e78ec0008-kube-api-access-96ng6\") pod \"nova-cell1-db-create-4vdpc\" (UID: \"bacf7fe9-2e8f-4909-9761-e73e78ec0008\") " pod="openstack/nova-cell1-db-create-4vdpc" Oct 14 13:33:35 crc kubenswrapper[4725]: I1014 13:33:35.010375 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-lp5lp" Oct 14 13:33:35 crc kubenswrapper[4725]: I1014 13:33:35.111431 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96ng6\" (UniqueName: \"kubernetes.io/projected/bacf7fe9-2e8f-4909-9761-e73e78ec0008-kube-api-access-96ng6\") pod \"nova-cell1-db-create-4vdpc\" (UID: \"bacf7fe9-2e8f-4909-9761-e73e78ec0008\") " pod="openstack/nova-cell1-db-create-4vdpc" Oct 14 13:33:35 crc kubenswrapper[4725]: I1014 13:33:35.134246 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96ng6\" (UniqueName: \"kubernetes.io/projected/bacf7fe9-2e8f-4909-9761-e73e78ec0008-kube-api-access-96ng6\") pod \"nova-cell1-db-create-4vdpc\" (UID: \"bacf7fe9-2e8f-4909-9761-e73e78ec0008\") " pod="openstack/nova-cell1-db-create-4vdpc" Oct 14 13:33:35 crc kubenswrapper[4725]: I1014 13:33:35.431495 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-4vdpc" Oct 14 13:33:35 crc kubenswrapper[4725]: I1014 13:33:35.969745 4725 generic.go:334] "Generic (PLEG): container finished" podID="7756dc76-bc61-4bab-a662-649e33ffc929" containerID="41a729d606fa02f4367514aee139728508c4f67c66425f2a882f5938d0001735" exitCode=0 Oct 14 13:33:35 crc kubenswrapper[4725]: I1014 13:33:35.969852 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7756dc76-bc61-4bab-a662-649e33ffc929","Type":"ContainerDied","Data":"41a729d606fa02f4367514aee139728508c4f67c66425f2a882f5938d0001735"} Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.268283 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.452515 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49af2f07-59b3-4a96-ba8b-f100b3960827-scripts\") pod \"49af2f07-59b3-4a96-ba8b-f100b3960827\" (UID: \"49af2f07-59b3-4a96-ba8b-f100b3960827\") " Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.452871 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49af2f07-59b3-4a96-ba8b-f100b3960827-run-httpd\") pod \"49af2f07-59b3-4a96-ba8b-f100b3960827\" (UID: \"49af2f07-59b3-4a96-ba8b-f100b3960827\") " Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.452915 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49af2f07-59b3-4a96-ba8b-f100b3960827-log-httpd\") pod \"49af2f07-59b3-4a96-ba8b-f100b3960827\" (UID: \"49af2f07-59b3-4a96-ba8b-f100b3960827\") " Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.452937 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49af2f07-59b3-4a96-ba8b-f100b3960827-config-data\") pod \"49af2f07-59b3-4a96-ba8b-f100b3960827\" (UID: \"49af2f07-59b3-4a96-ba8b-f100b3960827\") " Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.453130 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49af2f07-59b3-4a96-ba8b-f100b3960827-sg-core-conf-yaml\") pod \"49af2f07-59b3-4a96-ba8b-f100b3960827\" (UID: \"49af2f07-59b3-4a96-ba8b-f100b3960827\") " Oct 14 13:33:36 crc kubenswrapper[4725]: 
I1014 13:33:36.453160 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49af2f07-59b3-4a96-ba8b-f100b3960827-combined-ca-bundle\") pod \"49af2f07-59b3-4a96-ba8b-f100b3960827\" (UID: \"49af2f07-59b3-4a96-ba8b-f100b3960827\") " Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.453200 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6kc5\" (UniqueName: \"kubernetes.io/projected/49af2f07-59b3-4a96-ba8b-f100b3960827-kube-api-access-p6kc5\") pod \"49af2f07-59b3-4a96-ba8b-f100b3960827\" (UID: \"49af2f07-59b3-4a96-ba8b-f100b3960827\") " Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.453917 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49af2f07-59b3-4a96-ba8b-f100b3960827-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "49af2f07-59b3-4a96-ba8b-f100b3960827" (UID: "49af2f07-59b3-4a96-ba8b-f100b3960827"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.453933 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49af2f07-59b3-4a96-ba8b-f100b3960827-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "49af2f07-59b3-4a96-ba8b-f100b3960827" (UID: "49af2f07-59b3-4a96-ba8b-f100b3960827"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.465880 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49af2f07-59b3-4a96-ba8b-f100b3960827-kube-api-access-p6kc5" (OuterVolumeSpecName: "kube-api-access-p6kc5") pod "49af2f07-59b3-4a96-ba8b-f100b3960827" (UID: "49af2f07-59b3-4a96-ba8b-f100b3960827"). InnerVolumeSpecName "kube-api-access-p6kc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.475364 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49af2f07-59b3-4a96-ba8b-f100b3960827-scripts" (OuterVolumeSpecName: "scripts") pod "49af2f07-59b3-4a96-ba8b-f100b3960827" (UID: "49af2f07-59b3-4a96-ba8b-f100b3960827"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.556154 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6kc5\" (UniqueName: \"kubernetes.io/projected/49af2f07-59b3-4a96-ba8b-f100b3960827-kube-api-access-p6kc5\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.556183 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49af2f07-59b3-4a96-ba8b-f100b3960827-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.556193 4725 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49af2f07-59b3-4a96-ba8b-f100b3960827-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.556202 4725 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49af2f07-59b3-4a96-ba8b-f100b3960827-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.650630 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49af2f07-59b3-4a96-ba8b-f100b3960827-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "49af2f07-59b3-4a96-ba8b-f100b3960827" (UID: "49af2f07-59b3-4a96-ba8b-f100b3960827"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.657761 4725 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49af2f07-59b3-4a96-ba8b-f100b3960827-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.662308 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.680028 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49af2f07-59b3-4a96-ba8b-f100b3960827-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "49af2f07-59b3-4a96-ba8b-f100b3960827" (UID: "49af2f07-59b3-4a96-ba8b-f100b3960827"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.717777 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49af2f07-59b3-4a96-ba8b-f100b3960827-config-data" (OuterVolumeSpecName: "config-data") pod "49af2f07-59b3-4a96-ba8b-f100b3960827" (UID: "49af2f07-59b3-4a96-ba8b-f100b3960827"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.759782 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49af2f07-59b3-4a96-ba8b-f100b3960827-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.759822 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49af2f07-59b3-4a96-ba8b-f100b3960827-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.861037 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prls4\" (UniqueName: \"kubernetes.io/projected/7756dc76-bc61-4bab-a662-649e33ffc929-kube-api-access-prls4\") pod \"7756dc76-bc61-4bab-a662-649e33ffc929\" (UID: \"7756dc76-bc61-4bab-a662-649e33ffc929\") " Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.861130 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7756dc76-bc61-4bab-a662-649e33ffc929-combined-ca-bundle\") pod \"7756dc76-bc61-4bab-a662-649e33ffc929\" (UID: \"7756dc76-bc61-4bab-a662-649e33ffc929\") " Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.861199 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7756dc76-bc61-4bab-a662-649e33ffc929-config-data\") pod \"7756dc76-bc61-4bab-a662-649e33ffc929\" (UID: \"7756dc76-bc61-4bab-a662-649e33ffc929\") " Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.861217 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7756dc76-bc61-4bab-a662-649e33ffc929-scripts\") pod \"7756dc76-bc61-4bab-a662-649e33ffc929\" (UID: \"7756dc76-bc61-4bab-a662-649e33ffc929\") " Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.861236 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7756dc76-bc61-4bab-a662-649e33ffc929-logs\") pod \"7756dc76-bc61-4bab-a662-649e33ffc929\" (UID: \"7756dc76-bc61-4bab-a662-649e33ffc929\") " Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.861319 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7756dc76-bc61-4bab-a662-649e33ffc929-httpd-run\") pod \"7756dc76-bc61-4bab-a662-649e33ffc929\" (UID: \"7756dc76-bc61-4bab-a662-649e33ffc929\") " Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.861341 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7756dc76-bc61-4bab-a662-649e33ffc929-internal-tls-certs\") pod \"7756dc76-bc61-4bab-a662-649e33ffc929\" (UID: \"7756dc76-bc61-4bab-a662-649e33ffc929\") " Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.861434 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"7756dc76-bc61-4bab-a662-649e33ffc929\" (UID: \"7756dc76-bc61-4bab-a662-649e33ffc929\") " Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.862281 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/7756dc76-bc61-4bab-a662-649e33ffc929-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7756dc76-bc61-4bab-a662-649e33ffc929" (UID: "7756dc76-bc61-4bab-a662-649e33ffc929"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.862547 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7756dc76-bc61-4bab-a662-649e33ffc929-logs" (OuterVolumeSpecName: "logs") pod "7756dc76-bc61-4bab-a662-649e33ffc929" (UID: "7756dc76-bc61-4bab-a662-649e33ffc929"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.865981 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "7756dc76-bc61-4bab-a662-649e33ffc929" (UID: "7756dc76-bc61-4bab-a662-649e33ffc929"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.867006 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7756dc76-bc61-4bab-a662-649e33ffc929-kube-api-access-prls4" (OuterVolumeSpecName: "kube-api-access-prls4") pod "7756dc76-bc61-4bab-a662-649e33ffc929" (UID: "7756dc76-bc61-4bab-a662-649e33ffc929"). InnerVolumeSpecName "kube-api-access-prls4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.900591 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7756dc76-bc61-4bab-a662-649e33ffc929-scripts" (OuterVolumeSpecName: "scripts") pod "7756dc76-bc61-4bab-a662-649e33ffc929" (UID: "7756dc76-bc61-4bab-a662-649e33ffc929"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.908865 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7756dc76-bc61-4bab-a662-649e33ffc929-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7756dc76-bc61-4bab-a662-649e33ffc929" (UID: "7756dc76-bc61-4bab-a662-649e33ffc929"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.926627 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.942838 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7756dc76-bc61-4bab-a662-649e33ffc929-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7756dc76-bc61-4bab-a662-649e33ffc929" (UID: "7756dc76-bc61-4bab-a662-649e33ffc929"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.944810 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7756dc76-bc61-4bab-a662-649e33ffc929-config-data" (OuterVolumeSpecName: "config-data") pod "7756dc76-bc61-4bab-a662-649e33ffc929" (UID: "7756dc76-bc61-4bab-a662-649e33ffc929"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.963429 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7756dc76-bc61-4bab-a662-649e33ffc929-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.963479 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7756dc76-bc61-4bab-a662-649e33ffc929-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.963488 4725 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7756dc76-bc61-4bab-a662-649e33ffc929-logs\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.963497 4725 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7756dc76-bc61-4bab-a662-649e33ffc929-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.963505 4725 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7756dc76-bc61-4bab-a662-649e33ffc929-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.963533 4725 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.963544 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prls4\" (UniqueName: \"kubernetes.io/projected/7756dc76-bc61-4bab-a662-649e33ffc929-kube-api-access-prls4\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.963553 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7756dc76-bc61-4bab-a662-649e33ffc929-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:36 crc kubenswrapper[4725]: I1014 13:33:36.999134 4725 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:36.999724 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49af2f07-59b3-4a96-ba8b-f100b3960827","Type":"ContainerDied","Data":"b7db0573102e2e7483921b0aee891111358056e031f84cafc90d330a9fa179c2"} Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:36.999797 4725 scope.go:117] "RemoveContainer" containerID="ac1bef92ccd0fb887f55a931fa9763db4401c5dbc00689035ce73b613f593637" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.000009 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.008008 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"3e432c2c-86bc-4b07-81dd-a98be5ad1ca9","Type":"ContainerStarted","Data":"d0811f76cf26076e7f154002393d2e2a0421cd6ff2f2ea88a16a2ecb2d2ceaa3"} Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.040351 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5d89387e-949e-49c8-b6a1-543aaa1a02d5","Type":"ContainerDied","Data":"2021b478db781a1431816348c5d66db3476897b62c4a90d018812cee0dbf9e72"} Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.040495 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.064203 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"5d89387e-949e-49c8-b6a1-543aaa1a02d5\" (UID: \"5d89387e-949e-49c8-b6a1-543aaa1a02d5\") " Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.064285 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d89387e-949e-49c8-b6a1-543aaa1a02d5-logs\") pod \"5d89387e-949e-49c8-b6a1-543aaa1a02d5\" (UID: \"5d89387e-949e-49c8-b6a1-543aaa1a02d5\") " Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.064339 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d89387e-949e-49c8-b6a1-543aaa1a02d5-scripts\") pod \"5d89387e-949e-49c8-b6a1-543aaa1a02d5\" (UID: \"5d89387e-949e-49c8-b6a1-543aaa1a02d5\") " Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.064401 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d89387e-949e-49c8-b6a1-543aaa1a02d5-combined-ca-bundle\") pod \"5d89387e-949e-49c8-b6a1-543aaa1a02d5\" (UID: \"5d89387e-949e-49c8-b6a1-543aaa1a02d5\") " Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.064467 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zq5lv\" (UniqueName: \"kubernetes.io/projected/5d89387e-949e-49c8-b6a1-543aaa1a02d5-kube-api-access-zq5lv\") pod \"5d89387e-949e-49c8-b6a1-543aaa1a02d5\" (UID: \"5d89387e-949e-49c8-b6a1-543aaa1a02d5\") " Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.064498 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d89387e-949e-49c8-b6a1-543aaa1a02d5-public-tls-certs\") pod \"5d89387e-949e-49c8-b6a1-543aaa1a02d5\" (UID: \"5d89387e-949e-49c8-b6a1-543aaa1a02d5\") " Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.064552 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d89387e-949e-49c8-b6a1-543aaa1a02d5-config-data\") pod \"5d89387e-949e-49c8-b6a1-543aaa1a02d5\" (UID: \"5d89387e-949e-49c8-b6a1-543aaa1a02d5\") " Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.064601 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5d89387e-949e-49c8-b6a1-543aaa1a02d5-httpd-run\") pod 
\"5d89387e-949e-49c8-b6a1-543aaa1a02d5\" (UID: \"5d89387e-949e-49c8-b6a1-543aaa1a02d5\") " Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.064867 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7756dc76-bc61-4bab-a662-649e33ffc929","Type":"ContainerDied","Data":"cb1568b9a2a9c61856f210121bd2deed8ad2933246e855010e3f5a979991f543"} Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.065016 4725 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.066792 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.067357 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d89387e-949e-49c8-b6a1-543aaa1a02d5-logs" (OuterVolumeSpecName: "logs") pod "5d89387e-949e-49c8-b6a1-543aaa1a02d5" (UID: "5d89387e-949e-49c8-b6a1-543aaa1a02d5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.068270 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d89387e-949e-49c8-b6a1-543aaa1a02d5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5d89387e-949e-49c8-b6a1-543aaa1a02d5" (UID: "5d89387e-949e-49c8-b6a1-543aaa1a02d5"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.084614 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d89387e-949e-49c8-b6a1-543aaa1a02d5-scripts" (OuterVolumeSpecName: "scripts") pod "5d89387e-949e-49c8-b6a1-543aaa1a02d5" (UID: "5d89387e-949e-49c8-b6a1-543aaa1a02d5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.089414 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d89387e-949e-49c8-b6a1-543aaa1a02d5-kube-api-access-zq5lv" (OuterVolumeSpecName: "kube-api-access-zq5lv") pod "5d89387e-949e-49c8-b6a1-543aaa1a02d5" (UID: "5d89387e-949e-49c8-b6a1-543aaa1a02d5"). InnerVolumeSpecName "kube-api-access-zq5lv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.092225 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "5d89387e-949e-49c8-b6a1-543aaa1a02d5" (UID: "5d89387e-949e-49c8-b6a1-543aaa1a02d5"). InnerVolumeSpecName "local-storage10-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.094576 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.613236249 podStartE2EDuration="13.094559813s" podCreationTimestamp="2025-10-14 13:33:24 +0000 UTC" firstStartedPulling="2025-10-14 13:33:25.736428891 +0000 UTC m=+1122.584863700" lastFinishedPulling="2025-10-14 13:33:36.217752455 +0000 UTC m=+1133.066187264" observedRunningTime="2025-10-14 13:33:37.028879797 +0000 UTC m=+1133.877314626" watchObservedRunningTime="2025-10-14 13:33:37.094559813 +0000 UTC m=+1133.942994612" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.095626 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-4vdpc"] Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.107606 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d89387e-949e-49c8-b6a1-543aaa1a02d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d89387e-949e-49c8-b6a1-543aaa1a02d5" (UID: "5d89387e-949e-49c8-b6a1-543aaa1a02d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.115126 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-lp5lp"] Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.149738 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-42h4h"] Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.172871 4725 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5d89387e-949e-49c8-b6a1-543aaa1a02d5-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.172926 4725 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.172938 4725 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d89387e-949e-49c8-b6a1-543aaa1a02d5-logs\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.172946 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d89387e-949e-49c8-b6a1-543aaa1a02d5-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.172978 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d89387e-949e-49c8-b6a1-543aaa1a02d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.172999 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zq5lv\" (UniqueName: \"kubernetes.io/projected/5d89387e-949e-49c8-b6a1-543aaa1a02d5-kube-api-access-zq5lv\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.173028 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d89387e-949e-49c8-b6a1-543aaa1a02d5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5d89387e-949e-49c8-b6a1-543aaa1a02d5" (UID: "5d89387e-949e-49c8-b6a1-543aaa1a02d5"). 
InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.185585 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d89387e-949e-49c8-b6a1-543aaa1a02d5-config-data" (OuterVolumeSpecName: "config-data") pod "5d89387e-949e-49c8-b6a1-543aaa1a02d5" (UID: "5d89387e-949e-49c8-b6a1-543aaa1a02d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.195172 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-749ff78757-kdtzj"] Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.196830 4725 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.275571 4725 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d89387e-949e-49c8-b6a1-543aaa1a02d5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.275597 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d89387e-949e-49c8-b6a1-543aaa1a02d5-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.275607 4725 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.282378 4725 scope.go:117] "RemoveContainer" containerID="752db4880d002ef25970c98ed7b1733548fcc79d7452da56af27fb04dbdd1ea7" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.330199 4725 scope.go:117] "RemoveContainer" containerID="b1a749b85fa4505ee589e3ea59842a461366e82595a3889dc65b50c43e289bad" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.352798 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.362778 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.375265 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 13:33:37 crc kubenswrapper[4725]: E1014 13:33:37.375719 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49af2f07-59b3-4a96-ba8b-f100b3960827" containerName="proxy-httpd" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.375733 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="49af2f07-59b3-4a96-ba8b-f100b3960827" containerName="proxy-httpd" Oct 14 13:33:37 crc kubenswrapper[4725]: E1014 13:33:37.375750 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d89387e-949e-49c8-b6a1-543aaa1a02d5" containerName="glance-log" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.375756 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d89387e-949e-49c8-b6a1-543aaa1a02d5" containerName="glance-log" Oct 14 13:33:37 crc kubenswrapper[4725]: E1014 13:33:37.375772 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49af2f07-59b3-4a96-ba8b-f100b3960827" containerName="ceilometer-central-agent" Oct 14 13:33:37 crc 
kubenswrapper[4725]: I1014 13:33:37.375779 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="49af2f07-59b3-4a96-ba8b-f100b3960827" containerName="ceilometer-central-agent" Oct 14 13:33:37 crc kubenswrapper[4725]: E1014 13:33:37.375787 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7756dc76-bc61-4bab-a662-649e33ffc929" containerName="glance-httpd" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.375794 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="7756dc76-bc61-4bab-a662-649e33ffc929" containerName="glance-httpd" Oct 14 13:33:37 crc kubenswrapper[4725]: E1014 13:33:37.375803 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49af2f07-59b3-4a96-ba8b-f100b3960827" containerName="ceilometer-notification-agent" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.375809 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="49af2f07-59b3-4a96-ba8b-f100b3960827" containerName="ceilometer-notification-agent" Oct 14 13:33:37 crc kubenswrapper[4725]: E1014 13:33:37.375827 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7756dc76-bc61-4bab-a662-649e33ffc929" containerName="glance-log" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.375833 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="7756dc76-bc61-4bab-a662-649e33ffc929" containerName="glance-log" Oct 14 13:33:37 crc kubenswrapper[4725]: E1014 13:33:37.375842 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49af2f07-59b3-4a96-ba8b-f100b3960827" containerName="sg-core" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.375847 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="49af2f07-59b3-4a96-ba8b-f100b3960827" containerName="sg-core" Oct 14 13:33:37 crc kubenswrapper[4725]: E1014 13:33:37.375857 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d89387e-949e-49c8-b6a1-543aaa1a02d5" containerName="glance-httpd" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.375862 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d89387e-949e-49c8-b6a1-543aaa1a02d5" containerName="glance-httpd" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.376026 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="7756dc76-bc61-4bab-a662-649e33ffc929" containerName="glance-log" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.376043 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d89387e-949e-49c8-b6a1-543aaa1a02d5" containerName="glance-log" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.376057 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="7756dc76-bc61-4bab-a662-649e33ffc929" containerName="glance-httpd" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.376067 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="49af2f07-59b3-4a96-ba8b-f100b3960827" containerName="ceilometer-central-agent" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.376078 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="49af2f07-59b3-4a96-ba8b-f100b3960827" containerName="ceilometer-notification-agent" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.376091 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="49af2f07-59b3-4a96-ba8b-f100b3960827" containerName="sg-core" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.376100 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d89387e-949e-49c8-b6a1-543aaa1a02d5" 
containerName="glance-httpd" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.376110 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="49af2f07-59b3-4a96-ba8b-f100b3960827" containerName="proxy-httpd" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.377179 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.386930 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.390009 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-p8cwt" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.390204 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.390391 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.390522 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.401478 4725 scope.go:117] "RemoveContainer" containerID="d63c41cd88f500c01fe387c0ed9f01130d0069e0512e105bc63a28d3aa88bfa4" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.453048 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.453181 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.463356 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.471523 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.479089 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.482922 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c7c1e20-3ab6-42f7-81f6-aa6444a258ec-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6c7c1e20-3ab6-42f7-81f6-aa6444a258ec\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.483491 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c7c1e20-3ab6-42f7-81f6-aa6444a258ec-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6c7c1e20-3ab6-42f7-81f6-aa6444a258ec\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.483604 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"6c7c1e20-3ab6-42f7-81f6-aa6444a258ec\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.483692 4725 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c7c1e20-3ab6-42f7-81f6-aa6444a258ec-logs\") pod \"glance-default-internal-api-0\" (UID: \"6c7c1e20-3ab6-42f7-81f6-aa6444a258ec\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.483800 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c7c1e20-3ab6-42f7-81f6-aa6444a258ec-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6c7c1e20-3ab6-42f7-81f6-aa6444a258ec\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.483912 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6c7c1e20-3ab6-42f7-81f6-aa6444a258ec-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6c7c1e20-3ab6-42f7-81f6-aa6444a258ec\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.484243 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdptr\" (UniqueName: \"kubernetes.io/projected/6c7c1e20-3ab6-42f7-81f6-aa6444a258ec-kube-api-access-zdptr\") pod \"glance-default-internal-api-0\" (UID: \"6c7c1e20-3ab6-42f7-81f6-aa6444a258ec\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.484377 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c7c1e20-3ab6-42f7-81f6-aa6444a258ec-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6c7c1e20-3ab6-42f7-81f6-aa6444a258ec\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.484691 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.496008 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.496232 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.522238 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.552102 4725 scope.go:117] "RemoveContainer" containerID="90a394f171260f06a0c197fccb5ae0a94b79b2b8a61fa80072a9328aee753ccb" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.581311 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.582780 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.586798 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c7c1e20-3ab6-42f7-81f6-aa6444a258ec-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6c7c1e20-3ab6-42f7-81f6-aa6444a258ec\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.586852 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6c7c1e20-3ab6-42f7-81f6-aa6444a258ec-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6c7c1e20-3ab6-42f7-81f6-aa6444a258ec\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.586954 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdptr\" (UniqueName: \"kubernetes.io/projected/6c7c1e20-3ab6-42f7-81f6-aa6444a258ec-kube-api-access-zdptr\") pod \"glance-default-internal-api-0\" (UID: \"6c7c1e20-3ab6-42f7-81f6-aa6444a258ec\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.586998 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cfe2404-ed55-4965-830e-88dbdd3bb9b6-run-httpd\") pod \"ceilometer-0\" (UID: \"9cfe2404-ed55-4965-830e-88dbdd3bb9b6\") " pod="openstack/ceilometer-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.587057 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9cfe2404-ed55-4965-830e-88dbdd3bb9b6-scripts\") pod \"ceilometer-0\" (UID: \"9cfe2404-ed55-4965-830e-88dbdd3bb9b6\") " pod="openstack/ceilometer-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.587086 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c7c1e20-3ab6-42f7-81f6-aa6444a258ec-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6c7c1e20-3ab6-42f7-81f6-aa6444a258ec\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.587119 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cfe2404-ed55-4965-830e-88dbdd3bb9b6-config-data\") pod \"ceilometer-0\" (UID: \"9cfe2404-ed55-4965-830e-88dbdd3bb9b6\") " pod="openstack/ceilometer-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.587148 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9cfe2404-ed55-4965-830e-88dbdd3bb9b6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9cfe2404-ed55-4965-830e-88dbdd3bb9b6\") " pod="openstack/ceilometer-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.587172 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c7c1e20-3ab6-42f7-81f6-aa6444a258ec-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6c7c1e20-3ab6-42f7-81f6-aa6444a258ec\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 
13:33:37.587201 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7nfz\" (UniqueName: \"kubernetes.io/projected/9cfe2404-ed55-4965-830e-88dbdd3bb9b6-kube-api-access-b7nfz\") pod \"ceilometer-0\" (UID: \"9cfe2404-ed55-4965-830e-88dbdd3bb9b6\") " pod="openstack/ceilometer-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.587225 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cfe2404-ed55-4965-830e-88dbdd3bb9b6-log-httpd\") pod \"ceilometer-0\" (UID: \"9cfe2404-ed55-4965-830e-88dbdd3bb9b6\") " pod="openstack/ceilometer-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.587252 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cfe2404-ed55-4965-830e-88dbdd3bb9b6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9cfe2404-ed55-4965-830e-88dbdd3bb9b6\") " pod="openstack/ceilometer-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.587275 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c7c1e20-3ab6-42f7-81f6-aa6444a258ec-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6c7c1e20-3ab6-42f7-81f6-aa6444a258ec\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.587317 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"6c7c1e20-3ab6-42f7-81f6-aa6444a258ec\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.587360 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c7c1e20-3ab6-42f7-81f6-aa6444a258ec-logs\") pod \"glance-default-internal-api-0\" (UID: \"6c7c1e20-3ab6-42f7-81f6-aa6444a258ec\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.587845 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.587965 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c7c1e20-3ab6-42f7-81f6-aa6444a258ec-logs\") pod \"glance-default-internal-api-0\" (UID: \"6c7c1e20-3ab6-42f7-81f6-aa6444a258ec\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.588016 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.589296 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6c7c1e20-3ab6-42f7-81f6-aa6444a258ec-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6c7c1e20-3ab6-42f7-81f6-aa6444a258ec\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.592141 4725 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"6c7c1e20-3ab6-42f7-81f6-aa6444a258ec\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.605000 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c7c1e20-3ab6-42f7-81f6-aa6444a258ec-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6c7c1e20-3ab6-42f7-81f6-aa6444a258ec\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.626873 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.633488 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c7c1e20-3ab6-42f7-81f6-aa6444a258ec-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6c7c1e20-3ab6-42f7-81f6-aa6444a258ec\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.659008 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c7c1e20-3ab6-42f7-81f6-aa6444a258ec-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6c7c1e20-3ab6-42f7-81f6-aa6444a258ec\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.665738 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdptr\" (UniqueName: \"kubernetes.io/projected/6c7c1e20-3ab6-42f7-81f6-aa6444a258ec-kube-api-access-zdptr\") pod \"glance-default-internal-api-0\" (UID: \"6c7c1e20-3ab6-42f7-81f6-aa6444a258ec\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.666490 4725 scope.go:117] "RemoveContainer" containerID="246420b5ad57920b0bb890f04077bcafa9a3e0e0138ffaedc093baaedfa039cf" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.667287 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c7c1e20-3ab6-42f7-81f6-aa6444a258ec-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6c7c1e20-3ab6-42f7-81f6-aa6444a258ec\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.688959 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwrn2\" (UniqueName: \"kubernetes.io/projected/8afd9cea-f51f-4174-b856-90b357779daf-kube-api-access-fwrn2\") pod \"glance-default-external-api-0\" (UID: \"8afd9cea-f51f-4174-b856-90b357779daf\") " pod="openstack/glance-default-external-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.689004 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cfe2404-ed55-4965-830e-88dbdd3bb9b6-run-httpd\") pod \"ceilometer-0\" (UID: \"9cfe2404-ed55-4965-830e-88dbdd3bb9b6\") " pod="openstack/ceilometer-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.689048 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9cfe2404-ed55-4965-830e-88dbdd3bb9b6-scripts\") pod \"ceilometer-0\" (UID: \"9cfe2404-ed55-4965-830e-88dbdd3bb9b6\") " pod="openstack/ceilometer-0" Oct 14 13:33:37 
crc kubenswrapper[4725]: I1014 13:33:37.689068 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cfe2404-ed55-4965-830e-88dbdd3bb9b6-config-data\") pod \"ceilometer-0\" (UID: \"9cfe2404-ed55-4965-830e-88dbdd3bb9b6\") " pod="openstack/ceilometer-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.689092 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9cfe2404-ed55-4965-830e-88dbdd3bb9b6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9cfe2404-ed55-4965-830e-88dbdd3bb9b6\") " pod="openstack/ceilometer-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.689111 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"8afd9cea-f51f-4174-b856-90b357779daf\") " pod="openstack/glance-default-external-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.689127 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8afd9cea-f51f-4174-b856-90b357779daf-config-data\") pod \"glance-default-external-api-0\" (UID: \"8afd9cea-f51f-4174-b856-90b357779daf\") " pod="openstack/glance-default-external-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.689149 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7nfz\" (UniqueName: \"kubernetes.io/projected/9cfe2404-ed55-4965-830e-88dbdd3bb9b6-kube-api-access-b7nfz\") pod \"ceilometer-0\" (UID: \"9cfe2404-ed55-4965-830e-88dbdd3bb9b6\") " pod="openstack/ceilometer-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.689167 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cfe2404-ed55-4965-830e-88dbdd3bb9b6-log-httpd\") pod \"ceilometer-0\" (UID: \"9cfe2404-ed55-4965-830e-88dbdd3bb9b6\") " pod="openstack/ceilometer-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.689186 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8afd9cea-f51f-4174-b856-90b357779daf-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8afd9cea-f51f-4174-b856-90b357779daf\") " pod="openstack/glance-default-external-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.689204 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cfe2404-ed55-4965-830e-88dbdd3bb9b6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9cfe2404-ed55-4965-830e-88dbdd3bb9b6\") " pod="openstack/ceilometer-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.689220 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8afd9cea-f51f-4174-b856-90b357779daf-logs\") pod \"glance-default-external-api-0\" (UID: \"8afd9cea-f51f-4174-b856-90b357779daf\") " pod="openstack/glance-default-external-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.689236 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/8afd9cea-f51f-4174-b856-90b357779daf-scripts\") pod \"glance-default-external-api-0\" (UID: \"8afd9cea-f51f-4174-b856-90b357779daf\") " pod="openstack/glance-default-external-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.689255 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8afd9cea-f51f-4174-b856-90b357779daf-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8afd9cea-f51f-4174-b856-90b357779daf\") " pod="openstack/glance-default-external-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.689323 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8afd9cea-f51f-4174-b856-90b357779daf-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8afd9cea-f51f-4174-b856-90b357779daf\") " pod="openstack/glance-default-external-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.689687 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cfe2404-ed55-4965-830e-88dbdd3bb9b6-run-httpd\") pod \"ceilometer-0\" (UID: \"9cfe2404-ed55-4965-830e-88dbdd3bb9b6\") " pod="openstack/ceilometer-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.691814 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cfe2404-ed55-4965-830e-88dbdd3bb9b6-log-httpd\") pod \"ceilometer-0\" (UID: \"9cfe2404-ed55-4965-830e-88dbdd3bb9b6\") " pod="openstack/ceilometer-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.692227 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9cfe2404-ed55-4965-830e-88dbdd3bb9b6-scripts\") pod \"ceilometer-0\" (UID: \"9cfe2404-ed55-4965-830e-88dbdd3bb9b6\") " pod="openstack/ceilometer-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.692517 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"6c7c1e20-3ab6-42f7-81f6-aa6444a258ec\") " pod="openstack/glance-default-internal-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.698181 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cfe2404-ed55-4965-830e-88dbdd3bb9b6-config-data\") pod \"ceilometer-0\" (UID: \"9cfe2404-ed55-4965-830e-88dbdd3bb9b6\") " pod="openstack/ceilometer-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.701731 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9cfe2404-ed55-4965-830e-88dbdd3bb9b6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9cfe2404-ed55-4965-830e-88dbdd3bb9b6\") " pod="openstack/ceilometer-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.701936 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cfe2404-ed55-4965-830e-88dbdd3bb9b6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9cfe2404-ed55-4965-830e-88dbdd3bb9b6\") " pod="openstack/ceilometer-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.718473 4725 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-b7nfz\" (UniqueName: \"kubernetes.io/projected/9cfe2404-ed55-4965-830e-88dbdd3bb9b6-kube-api-access-b7nfz\") pod \"ceilometer-0\" (UID: \"9cfe2404-ed55-4965-830e-88dbdd3bb9b6\") " pod="openstack/ceilometer-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.731654 4725 scope.go:117] "RemoveContainer" containerID="41a729d606fa02f4367514aee139728508c4f67c66425f2a882f5938d0001735" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.769214 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.791396 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"8afd9cea-f51f-4174-b856-90b357779daf\") " pod="openstack/glance-default-external-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.791436 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8afd9cea-f51f-4174-b856-90b357779daf-config-data\") pod \"glance-default-external-api-0\" (UID: \"8afd9cea-f51f-4174-b856-90b357779daf\") " pod="openstack/glance-default-external-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.791478 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8afd9cea-f51f-4174-b856-90b357779daf-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8afd9cea-f51f-4174-b856-90b357779daf\") " pod="openstack/glance-default-external-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.791496 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8afd9cea-f51f-4174-b856-90b357779daf-logs\") pod \"glance-default-external-api-0\" (UID: \"8afd9cea-f51f-4174-b856-90b357779daf\") " pod="openstack/glance-default-external-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.791516 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8afd9cea-f51f-4174-b856-90b357779daf-scripts\") pod \"glance-default-external-api-0\" (UID: \"8afd9cea-f51f-4174-b856-90b357779daf\") " pod="openstack/glance-default-external-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.791539 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8afd9cea-f51f-4174-b856-90b357779daf-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8afd9cea-f51f-4174-b856-90b357779daf\") " pod="openstack/glance-default-external-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.791590 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8afd9cea-f51f-4174-b856-90b357779daf-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8afd9cea-f51f-4174-b856-90b357779daf\") " pod="openstack/glance-default-external-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.791633 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwrn2\" (UniqueName: 
\"kubernetes.io/projected/8afd9cea-f51f-4174-b856-90b357779daf-kube-api-access-fwrn2\") pod \"glance-default-external-api-0\" (UID: \"8afd9cea-f51f-4174-b856-90b357779daf\") " pod="openstack/glance-default-external-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.792019 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8afd9cea-f51f-4174-b856-90b357779daf-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8afd9cea-f51f-4174-b856-90b357779daf\") " pod="openstack/glance-default-external-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.792181 4725 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"8afd9cea-f51f-4174-b856-90b357779daf\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.792681 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8afd9cea-f51f-4174-b856-90b357779daf-logs\") pod \"glance-default-external-api-0\" (UID: \"8afd9cea-f51f-4174-b856-90b357779daf\") " pod="openstack/glance-default-external-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.798215 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8afd9cea-f51f-4174-b856-90b357779daf-scripts\") pod \"glance-default-external-api-0\" (UID: \"8afd9cea-f51f-4174-b856-90b357779daf\") " pod="openstack/glance-default-external-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.800156 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8afd9cea-f51f-4174-b856-90b357779daf-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8afd9cea-f51f-4174-b856-90b357779daf\") " pod="openstack/glance-default-external-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.800289 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8afd9cea-f51f-4174-b856-90b357779daf-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8afd9cea-f51f-4174-b856-90b357779daf\") " pod="openstack/glance-default-external-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.801229 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8afd9cea-f51f-4174-b856-90b357779daf-config-data\") pod \"glance-default-external-api-0\" (UID: \"8afd9cea-f51f-4174-b856-90b357779daf\") " pod="openstack/glance-default-external-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.809409 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwrn2\" (UniqueName: \"kubernetes.io/projected/8afd9cea-f51f-4174-b856-90b357779daf-kube-api-access-fwrn2\") pod \"glance-default-external-api-0\" (UID: \"8afd9cea-f51f-4174-b856-90b357779daf\") " pod="openstack/glance-default-external-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.816580 4725 scope.go:117] "RemoveContainer" containerID="692eff8fb6d388e0baa3e13d498259c71a7e5574109db0a7eb814434936bbaa6" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.829497 4725 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"8afd9cea-f51f-4174-b856-90b357779daf\") " pod="openstack/glance-default-external-api-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.889514 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.947196 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49af2f07-59b3-4a96-ba8b-f100b3960827" path="/var/lib/kubelet/pods/49af2f07-59b3-4a96-ba8b-f100b3960827/volumes" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.948488 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d89387e-949e-49c8-b6a1-543aaa1a02d5" path="/var/lib/kubelet/pods/5d89387e-949e-49c8-b6a1-543aaa1a02d5/volumes" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.973865 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7756dc76-bc61-4bab-a662-649e33ffc929" path="/var/lib/kubelet/pods/7756dc76-bc61-4bab-a662-649e33ffc929/volumes" Oct 14 13:33:37 crc kubenswrapper[4725]: I1014 13:33:37.982756 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 14 13:33:38 crc kubenswrapper[4725]: I1014 13:33:38.133904 4725 generic.go:334] "Generic (PLEG): container finished" podID="97761d43-e004-4dd0-9648-9ef68fbf7d18" containerID="fd312f0d0c681b4221491063340d0d4031a50c89bad324a7bd357344f15051b3" exitCode=0 Oct 14 13:33:38 crc kubenswrapper[4725]: I1014 13:33:38.134750 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-lp5lp" event={"ID":"97761d43-e004-4dd0-9648-9ef68fbf7d18","Type":"ContainerDied","Data":"fd312f0d0c681b4221491063340d0d4031a50c89bad324a7bd357344f15051b3"} Oct 14 13:33:38 crc kubenswrapper[4725]: I1014 13:33:38.134783 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-lp5lp" event={"ID":"97761d43-e004-4dd0-9648-9ef68fbf7d18","Type":"ContainerStarted","Data":"3e09374942ea8ea11e70c53e33cac24acf8d8d3721086d7a26c71f5436f55fe7"} Oct 14 13:33:38 crc kubenswrapper[4725]: I1014 13:33:38.143399 4725 generic.go:334] "Generic (PLEG): container finished" podID="bacf7fe9-2e8f-4909-9761-e73e78ec0008" containerID="30e6b0ad0cebba6a776f105c671a35f291f4ee3ccf91698f91eb0e58bac12c3c" exitCode=0 Oct 14 13:33:38 crc kubenswrapper[4725]: I1014 13:33:38.143694 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-4vdpc" event={"ID":"bacf7fe9-2e8f-4909-9761-e73e78ec0008","Type":"ContainerDied","Data":"30e6b0ad0cebba6a776f105c671a35f291f4ee3ccf91698f91eb0e58bac12c3c"} Oct 14 13:33:38 crc kubenswrapper[4725]: I1014 13:33:38.143732 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-4vdpc" event={"ID":"bacf7fe9-2e8f-4909-9761-e73e78ec0008","Type":"ContainerStarted","Data":"1c6dc7eda2f33b7a91814e487e164cd9bd106eaa10179c4d30492500907bd84f"} Oct 14 13:33:38 crc kubenswrapper[4725]: I1014 13:33:38.157369 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-749ff78757-kdtzj" event={"ID":"77d143f2-1e54-4c47-a06b-90136098179d","Type":"ContainerStarted","Data":"21f633d8377c17b4f00512ab51d12497932e37622c7354956ac84d7e2bbcaccd"} Oct 14 13:33:38 crc kubenswrapper[4725]: I1014 13:33:38.157412 4725 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/swift-proxy-749ff78757-kdtzj" event={"ID":"77d143f2-1e54-4c47-a06b-90136098179d","Type":"ContainerStarted","Data":"e42af4d64ee0b1b70de2ae19b39fb29b0e6fa84b61d39f9ce999e94d301f10e9"} Oct 14 13:33:38 crc kubenswrapper[4725]: I1014 13:33:38.158388 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-749ff78757-kdtzj" Oct 14 13:33:38 crc kubenswrapper[4725]: I1014 13:33:38.158410 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-749ff78757-kdtzj" Oct 14 13:33:38 crc kubenswrapper[4725]: I1014 13:33:38.160341 4725 generic.go:334] "Generic (PLEG): container finished" podID="6bbd0716-da3f-4287-9ede-533325e1b42e" containerID="ba31362afcca98cb2af19bc9c3b44949a8b5c6a7647abc6180dd74e41a0046ef" exitCode=0 Oct 14 13:33:38 crc kubenswrapper[4725]: I1014 13:33:38.161014 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-42h4h" event={"ID":"6bbd0716-da3f-4287-9ede-533325e1b42e","Type":"ContainerDied","Data":"ba31362afcca98cb2af19bc9c3b44949a8b5c6a7647abc6180dd74e41a0046ef"} Oct 14 13:33:38 crc kubenswrapper[4725]: I1014 13:33:38.161033 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-42h4h" event={"ID":"6bbd0716-da3f-4287-9ede-533325e1b42e","Type":"ContainerStarted","Data":"fb5e9c32d3bdb477bc01948acc1aa38150c335721c71c486c086da398a1e7d92"} Oct 14 13:33:38 crc kubenswrapper[4725]: I1014 13:33:38.204376 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-749ff78757-kdtzj" podStartSLOduration=9.204350295 podStartE2EDuration="9.204350295s" podCreationTimestamp="2025-10-14 13:33:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:33:38.197411693 +0000 UTC m=+1135.045846512" watchObservedRunningTime="2025-10-14 13:33:38.204350295 +0000 UTC m=+1135.052785104" Oct 14 13:33:38 crc kubenswrapper[4725]: I1014 13:33:38.430391 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 14 13:33:38 crc kubenswrapper[4725]: W1014 13:33:38.434176 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c7c1e20_3ab6_42f7_81f6_aa6444a258ec.slice/crio-1acd3941cc40fe0c15af8851901f738b46cad4eb959c9113472bac1f93e4c6e0 WatchSource:0}: Error finding container 1acd3941cc40fe0c15af8851901f738b46cad4eb959c9113472bac1f93e4c6e0: Status 404 returned error can't find the container with id 1acd3941cc40fe0c15af8851901f738b46cad4eb959c9113472bac1f93e4c6e0 Oct 14 13:33:38 crc kubenswrapper[4725]: I1014 13:33:38.544769 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 13:33:38 crc kubenswrapper[4725]: I1014 13:33:38.708720 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 14 13:33:38 crc kubenswrapper[4725]: W1014 13:33:38.720765 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8afd9cea_f51f_4174_b856_90b357779daf.slice/crio-53b57014a3abc033763ffe2f22b376914771b5a04749f14c0b60fcf15bf06008 WatchSource:0}: Error finding container 53b57014a3abc033763ffe2f22b376914771b5a04749f14c0b60fcf15bf06008: Status 404 returned error can't find the container with id 
53b57014a3abc033763ffe2f22b376914771b5a04749f14c0b60fcf15bf06008 Oct 14 13:33:38 crc kubenswrapper[4725]: I1014 13:33:38.920539 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 14 13:33:38 crc kubenswrapper[4725]: I1014 13:33:38.921085 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="b08afeea-2257-428e-be50-e2cf1b5cc67e" containerName="kube-state-metrics" containerID="cri-o://2486c7747b7a5763b53b0bb9a1d4e8ac0456911909b2c21272424b1eab3618b4" gracePeriod=30 Oct 14 13:33:39 crc kubenswrapper[4725]: I1014 13:33:39.179933 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c7c1e20-3ab6-42f7-81f6-aa6444a258ec","Type":"ContainerStarted","Data":"1acd3941cc40fe0c15af8851901f738b46cad4eb959c9113472bac1f93e4c6e0"} Oct 14 13:33:39 crc kubenswrapper[4725]: I1014 13:33:39.194604 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-749ff78757-kdtzj" event={"ID":"77d143f2-1e54-4c47-a06b-90136098179d","Type":"ContainerStarted","Data":"d501fb0b5ea804f2a56eeece1b50f8dad5a4eaedd0ee87b9205d7d727b6b14d9"} Oct 14 13:33:39 crc kubenswrapper[4725]: I1014 13:33:39.208240 4725 generic.go:334] "Generic (PLEG): container finished" podID="b08afeea-2257-428e-be50-e2cf1b5cc67e" containerID="2486c7747b7a5763b53b0bb9a1d4e8ac0456911909b2c21272424b1eab3618b4" exitCode=2 Oct 14 13:33:39 crc kubenswrapper[4725]: I1014 13:33:39.208592 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b08afeea-2257-428e-be50-e2cf1b5cc67e","Type":"ContainerDied","Data":"2486c7747b7a5763b53b0bb9a1d4e8ac0456911909b2c21272424b1eab3618b4"} Oct 14 13:33:39 crc kubenswrapper[4725]: I1014 13:33:39.211161 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8afd9cea-f51f-4174-b856-90b357779daf","Type":"ContainerStarted","Data":"53b57014a3abc033763ffe2f22b376914771b5a04749f14c0b60fcf15bf06008"} Oct 14 13:33:39 crc kubenswrapper[4725]: I1014 13:33:39.213997 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9cfe2404-ed55-4965-830e-88dbdd3bb9b6","Type":"ContainerStarted","Data":"97b75ed923dff3a867ac77ea21fd04ce38d4c2cd957dadf3de28a3a5acbb9c84"} Oct 14 13:33:39 crc kubenswrapper[4725]: I1014 13:33:39.491466 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 14 13:33:39 crc kubenswrapper[4725]: I1014 13:33:39.547913 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhs8m\" (UniqueName: \"kubernetes.io/projected/b08afeea-2257-428e-be50-e2cf1b5cc67e-kube-api-access-rhs8m\") pod \"b08afeea-2257-428e-be50-e2cf1b5cc67e\" (UID: \"b08afeea-2257-428e-be50-e2cf1b5cc67e\") " Oct 14 13:33:39 crc kubenswrapper[4725]: I1014 13:33:39.567126 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b08afeea-2257-428e-be50-e2cf1b5cc67e-kube-api-access-rhs8m" (OuterVolumeSpecName: "kube-api-access-rhs8m") pod "b08afeea-2257-428e-be50-e2cf1b5cc67e" (UID: "b08afeea-2257-428e-be50-e2cf1b5cc67e"). InnerVolumeSpecName "kube-api-access-rhs8m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:33:39 crc kubenswrapper[4725]: I1014 13:33:39.660288 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhs8m\" (UniqueName: \"kubernetes.io/projected/b08afeea-2257-428e-be50-e2cf1b5cc67e-kube-api-access-rhs8m\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:39 crc kubenswrapper[4725]: I1014 13:33:39.713353 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-9d5c84b44-vssnm" podUID="551500de-77a9-4f28-ab4b-b8259e04804b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Oct 14 13:33:39 crc kubenswrapper[4725]: I1014 13:33:39.713707 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-9d5c84b44-vssnm" Oct 14 13:33:39 crc kubenswrapper[4725]: I1014 13:33:39.752403 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-4vdpc" Oct 14 13:33:39 crc kubenswrapper[4725]: I1014 13:33:39.861928 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-42h4h" Oct 14 13:33:39 crc kubenswrapper[4725]: I1014 13:33:39.864441 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96ng6\" (UniqueName: \"kubernetes.io/projected/bacf7fe9-2e8f-4909-9761-e73e78ec0008-kube-api-access-96ng6\") pod \"bacf7fe9-2e8f-4909-9761-e73e78ec0008\" (UID: \"bacf7fe9-2e8f-4909-9761-e73e78ec0008\") " Oct 14 13:33:39 crc kubenswrapper[4725]: I1014 13:33:39.869410 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bacf7fe9-2e8f-4909-9761-e73e78ec0008-kube-api-access-96ng6" (OuterVolumeSpecName: "kube-api-access-96ng6") pod "bacf7fe9-2e8f-4909-9761-e73e78ec0008" (UID: "bacf7fe9-2e8f-4909-9761-e73e78ec0008"). InnerVolumeSpecName "kube-api-access-96ng6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:33:39 crc kubenswrapper[4725]: I1014 13:33:39.902860 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-lp5lp" Oct 14 13:33:39 crc kubenswrapper[4725]: I1014 13:33:39.966297 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rl4jk\" (UniqueName: \"kubernetes.io/projected/6bbd0716-da3f-4287-9ede-533325e1b42e-kube-api-access-rl4jk\") pod \"6bbd0716-da3f-4287-9ede-533325e1b42e\" (UID: \"6bbd0716-da3f-4287-9ede-533325e1b42e\") " Oct 14 13:33:39 crc kubenswrapper[4725]: I1014 13:33:39.966664 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4z7n\" (UniqueName: \"kubernetes.io/projected/97761d43-e004-4dd0-9648-9ef68fbf7d18-kube-api-access-d4z7n\") pod \"97761d43-e004-4dd0-9648-9ef68fbf7d18\" (UID: \"97761d43-e004-4dd0-9648-9ef68fbf7d18\") " Oct 14 13:33:39 crc kubenswrapper[4725]: I1014 13:33:39.967544 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96ng6\" (UniqueName: \"kubernetes.io/projected/bacf7fe9-2e8f-4909-9761-e73e78ec0008-kube-api-access-96ng6\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:39 crc kubenswrapper[4725]: I1014 13:33:39.971675 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97761d43-e004-4dd0-9648-9ef68fbf7d18-kube-api-access-d4z7n" (OuterVolumeSpecName: "kube-api-access-d4z7n") pod "97761d43-e004-4dd0-9648-9ef68fbf7d18" (UID: "97761d43-e004-4dd0-9648-9ef68fbf7d18"). InnerVolumeSpecName "kube-api-access-d4z7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:33:39 crc kubenswrapper[4725]: I1014 13:33:39.971942 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bbd0716-da3f-4287-9ede-533325e1b42e-kube-api-access-rl4jk" (OuterVolumeSpecName: "kube-api-access-rl4jk") pod "6bbd0716-da3f-4287-9ede-533325e1b42e" (UID: "6bbd0716-da3f-4287-9ede-533325e1b42e"). InnerVolumeSpecName "kube-api-access-rl4jk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:33:40 crc kubenswrapper[4725]: I1014 13:33:40.068731 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4z7n\" (UniqueName: \"kubernetes.io/projected/97761d43-e004-4dd0-9648-9ef68fbf7d18-kube-api-access-d4z7n\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:40 crc kubenswrapper[4725]: I1014 13:33:40.069015 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rl4jk\" (UniqueName: \"kubernetes.io/projected/6bbd0716-da3f-4287-9ede-533325e1b42e-kube-api-access-rl4jk\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:40 crc kubenswrapper[4725]: I1014 13:33:40.256590 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8afd9cea-f51f-4174-b856-90b357779daf","Type":"ContainerStarted","Data":"faa8ce150dae5ed30505c51fc139d4c2bd9fd2214038151e593c27c7eb0a4fc3"} Oct 14 13:33:40 crc kubenswrapper[4725]: I1014 13:33:40.264986 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9cfe2404-ed55-4965-830e-88dbdd3bb9b6","Type":"ContainerStarted","Data":"3204b74695bf3652a70ddb7eae3867d114dab22433a1c9381171368c69a8861c"} Oct 14 13:33:40 crc kubenswrapper[4725]: I1014 13:33:40.269287 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-lp5lp" event={"ID":"97761d43-e004-4dd0-9648-9ef68fbf7d18","Type":"ContainerDied","Data":"3e09374942ea8ea11e70c53e33cac24acf8d8d3721086d7a26c71f5436f55fe7"} Oct 14 13:33:40 crc kubenswrapper[4725]: I1014 13:33:40.269315 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e09374942ea8ea11e70c53e33cac24acf8d8d3721086d7a26c71f5436f55fe7" Oct 14 13:33:40 crc kubenswrapper[4725]: I1014 13:33:40.269364 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-lp5lp" Oct 14 13:33:40 crc kubenswrapper[4725]: I1014 13:33:40.277074 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c7c1e20-3ab6-42f7-81f6-aa6444a258ec","Type":"ContainerStarted","Data":"c29055b166697d7f8b08ed157d0ea33d1bbd39c849608170a77f298b6129fb34"} Oct 14 13:33:40 crc kubenswrapper[4725]: I1014 13:33:40.280843 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-4vdpc" event={"ID":"bacf7fe9-2e8f-4909-9761-e73e78ec0008","Type":"ContainerDied","Data":"1c6dc7eda2f33b7a91814e487e164cd9bd106eaa10179c4d30492500907bd84f"} Oct 14 13:33:40 crc kubenswrapper[4725]: I1014 13:33:40.280863 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c6dc7eda2f33b7a91814e487e164cd9bd106eaa10179c4d30492500907bd84f" Oct 14 13:33:40 crc kubenswrapper[4725]: I1014 13:33:40.280914 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-4vdpc" Oct 14 13:33:40 crc kubenswrapper[4725]: I1014 13:33:40.312957 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-42h4h" event={"ID":"6bbd0716-da3f-4287-9ede-533325e1b42e","Type":"ContainerDied","Data":"fb5e9c32d3bdb477bc01948acc1aa38150c335721c71c486c086da398a1e7d92"} Oct 14 13:33:40 crc kubenswrapper[4725]: I1014 13:33:40.312990 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb5e9c32d3bdb477bc01948acc1aa38150c335721c71c486c086da398a1e7d92" Oct 14 13:33:40 crc kubenswrapper[4725]: I1014 13:33:40.313049 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-42h4h" Oct 14 13:33:40 crc kubenswrapper[4725]: I1014 13:33:40.323974 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 14 13:33:40 crc kubenswrapper[4725]: I1014 13:33:40.324269 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b08afeea-2257-428e-be50-e2cf1b5cc67e","Type":"ContainerDied","Data":"aa6e15a2b01291e75edaffee8e7d12423af2379662fe30427fe0c2ca0de5bd85"} Oct 14 13:33:40 crc kubenswrapper[4725]: I1014 13:33:40.324457 4725 scope.go:117] "RemoveContainer" containerID="2486c7747b7a5763b53b0bb9a1d4e8ac0456911909b2c21272424b1eab3618b4" Oct 14 13:33:40 crc kubenswrapper[4725]: I1014 13:33:40.420292 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 14 13:33:40 crc kubenswrapper[4725]: I1014 13:33:40.498796 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 14 13:33:40 crc kubenswrapper[4725]: I1014 13:33:40.528537 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Oct 14 13:33:40 crc kubenswrapper[4725]: E1014 13:33:40.529032 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bbd0716-da3f-4287-9ede-533325e1b42e" containerName="mariadb-database-create" Oct 14 13:33:40 crc kubenswrapper[4725]: I1014 13:33:40.529057 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bbd0716-da3f-4287-9ede-533325e1b42e" containerName="mariadb-database-create" Oct 14 13:33:40 crc kubenswrapper[4725]: E1014 13:33:40.529080 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bacf7fe9-2e8f-4909-9761-e73e78ec0008" containerName="mariadb-database-create" Oct 14 13:33:40 crc kubenswrapper[4725]: I1014 13:33:40.529089 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="bacf7fe9-2e8f-4909-9761-e73e78ec0008" containerName="mariadb-database-create" Oct 14 13:33:40 crc kubenswrapper[4725]: E1014 13:33:40.529109 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97761d43-e004-4dd0-9648-9ef68fbf7d18" containerName="mariadb-database-create" Oct 14 13:33:40 crc kubenswrapper[4725]: I1014 13:33:40.529117 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="97761d43-e004-4dd0-9648-9ef68fbf7d18" containerName="mariadb-database-create" Oct 14 13:33:40 crc kubenswrapper[4725]: E1014 13:33:40.529145 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b08afeea-2257-428e-be50-e2cf1b5cc67e" containerName="kube-state-metrics" Oct 14 13:33:40 crc kubenswrapper[4725]: I1014 13:33:40.529152 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="b08afeea-2257-428e-be50-e2cf1b5cc67e" containerName="kube-state-metrics" 
Oct 14 13:33:40 crc kubenswrapper[4725]: I1014 13:33:40.529342 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bbd0716-da3f-4287-9ede-533325e1b42e" containerName="mariadb-database-create"
Oct 14 13:33:40 crc kubenswrapper[4725]: I1014 13:33:40.529365 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="bacf7fe9-2e8f-4909-9761-e73e78ec0008" containerName="mariadb-database-create"
Oct 14 13:33:40 crc kubenswrapper[4725]: I1014 13:33:40.529377 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="97761d43-e004-4dd0-9648-9ef68fbf7d18" containerName="mariadb-database-create"
Oct 14 13:33:40 crc kubenswrapper[4725]: I1014 13:33:40.529395 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="b08afeea-2257-428e-be50-e2cf1b5cc67e" containerName="kube-state-metrics"
Oct 14 13:33:40 crc kubenswrapper[4725]: I1014 13:33:40.530122 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Oct 14 13:33:40 crc kubenswrapper[4725]: I1014 13:33:40.533784 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Oct 14 13:33:40 crc kubenswrapper[4725]: I1014 13:33:40.538188 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Oct 14 13:33:40 crc kubenswrapper[4725]: I1014 13:33:40.543342 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Oct 14 13:33:40 crc kubenswrapper[4725]: I1014 13:33:40.584408 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/545ffde3-abf2-443a-853d-c4d2a35a7e56-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"545ffde3-abf2-443a-853d-c4d2a35a7e56\") " pod="openstack/kube-state-metrics-0"
Oct 14 13:33:40 crc kubenswrapper[4725]: I1014 13:33:40.584509 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/545ffde3-abf2-443a-853d-c4d2a35a7e56-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"545ffde3-abf2-443a-853d-c4d2a35a7e56\") " pod="openstack/kube-state-metrics-0"
Oct 14 13:33:40 crc kubenswrapper[4725]: I1014 13:33:40.584553 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/545ffde3-abf2-443a-853d-c4d2a35a7e56-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"545ffde3-abf2-443a-853d-c4d2a35a7e56\") " pod="openstack/kube-state-metrics-0"
Oct 14 13:33:40 crc kubenswrapper[4725]: I1014 13:33:40.584640 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppcfp\" (UniqueName: \"kubernetes.io/projected/545ffde3-abf2-443a-853d-c4d2a35a7e56-kube-api-access-ppcfp\") pod \"kube-state-metrics-0\" (UID: \"545ffde3-abf2-443a-853d-c4d2a35a7e56\") " pod="openstack/kube-state-metrics-0"
Oct 14 13:33:40 crc kubenswrapper[4725]: I1014 13:33:40.685875 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/545ffde3-abf2-443a-853d-c4d2a35a7e56-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"545ffde3-abf2-443a-853d-c4d2a35a7e56\") " pod="openstack/kube-state-metrics-0"
Oct 14 13:33:40 crc kubenswrapper[4725]: I1014 13:33:40.686244 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/545ffde3-abf2-443a-853d-c4d2a35a7e56-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"545ffde3-abf2-443a-853d-c4d2a35a7e56\") " pod="openstack/kube-state-metrics-0" Oct 14 13:33:40 crc kubenswrapper[4725]: I1014 13:33:40.686295 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/545ffde3-abf2-443a-853d-c4d2a35a7e56-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"545ffde3-abf2-443a-853d-c4d2a35a7e56\") " pod="openstack/kube-state-metrics-0" Oct 14 13:33:40 crc kubenswrapper[4725]: I1014 13:33:40.686395 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppcfp\" (UniqueName: \"kubernetes.io/projected/545ffde3-abf2-443a-853d-c4d2a35a7e56-kube-api-access-ppcfp\") pod \"kube-state-metrics-0\" (UID: \"545ffde3-abf2-443a-853d-c4d2a35a7e56\") " pod="openstack/kube-state-metrics-0" Oct 14 13:33:40 crc kubenswrapper[4725]: I1014 13:33:40.701835 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/545ffde3-abf2-443a-853d-c4d2a35a7e56-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"545ffde3-abf2-443a-853d-c4d2a35a7e56\") " pod="openstack/kube-state-metrics-0" Oct 14 13:33:40 crc kubenswrapper[4725]: I1014 13:33:40.705124 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppcfp\" (UniqueName: \"kubernetes.io/projected/545ffde3-abf2-443a-853d-c4d2a35a7e56-kube-api-access-ppcfp\") pod \"kube-state-metrics-0\" (UID: \"545ffde3-abf2-443a-853d-c4d2a35a7e56\") " pod="openstack/kube-state-metrics-0" Oct 14 13:33:40 crc kubenswrapper[4725]: I1014 13:33:40.705624 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/545ffde3-abf2-443a-853d-c4d2a35a7e56-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"545ffde3-abf2-443a-853d-c4d2a35a7e56\") " pod="openstack/kube-state-metrics-0" Oct 14 13:33:40 crc kubenswrapper[4725]: I1014 13:33:40.706007 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/545ffde3-abf2-443a-853d-c4d2a35a7e56-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"545ffde3-abf2-443a-853d-c4d2a35a7e56\") " pod="openstack/kube-state-metrics-0" Oct 14 13:33:40 crc kubenswrapper[4725]: I1014 13:33:40.851759 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 14 13:33:41 crc kubenswrapper[4725]: I1014 13:33:41.337223 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8afd9cea-f51f-4174-b856-90b357779daf","Type":"ContainerStarted","Data":"90261c857e77a631f8074d600380efed62c455ca2325fa411231c4727767ab68"} Oct 14 13:33:41 crc kubenswrapper[4725]: I1014 13:33:41.338766 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9cfe2404-ed55-4965-830e-88dbdd3bb9b6","Type":"ContainerStarted","Data":"22369bc33614d7c19adb78143f613915e3102af33b0ed49a0fb445ed4bc3f19c"} Oct 14 13:33:41 crc kubenswrapper[4725]: I1014 13:33:41.340939 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c7c1e20-3ab6-42f7-81f6-aa6444a258ec","Type":"ContainerStarted","Data":"abe388101f2d78d0c5f4862b91ef7b1012b4244210b3470f9973c1dbc5398492"} Oct 14 13:33:41 crc kubenswrapper[4725]: I1014 13:33:41.342935 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 14 13:33:41 crc kubenswrapper[4725]: I1014 13:33:41.364715 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.364689525 podStartE2EDuration="4.364689525s" podCreationTimestamp="2025-10-14 13:33:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:33:41.359257805 +0000 UTC m=+1138.207692634" watchObservedRunningTime="2025-10-14 13:33:41.364689525 +0000 UTC m=+1138.213124344" Oct 14 13:33:41 crc kubenswrapper[4725]: I1014 13:33:41.835991 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.835969179 podStartE2EDuration="4.835969179s" podCreationTimestamp="2025-10-14 13:33:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:33:41.394043927 +0000 UTC m=+1138.242478756" watchObservedRunningTime="2025-10-14 13:33:41.835969179 +0000 UTC m=+1138.684403988" Oct 14 13:33:41 crc kubenswrapper[4725]: I1014 13:33:41.845198 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 14 13:33:41 crc kubenswrapper[4725]: I1014 13:33:41.933040 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b08afeea-2257-428e-be50-e2cf1b5cc67e" path="/var/lib/kubelet/pods/b08afeea-2257-428e-be50-e2cf1b5cc67e/volumes" Oct 14 13:33:42 crc kubenswrapper[4725]: I1014 13:33:42.367616 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9cfe2404-ed55-4965-830e-88dbdd3bb9b6","Type":"ContainerStarted","Data":"e9d550016739b5296af7c5aa88b98f67b00d4d576119e877414138779d093732"} Oct 14 13:33:42 crc kubenswrapper[4725]: I1014 13:33:42.369432 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"545ffde3-abf2-443a-853d-c4d2a35a7e56","Type":"ContainerStarted","Data":"159110d2fdc5bb049f8ea1cc6ee3f9fc3f58b7cf56a84b8a199c6f9528ae1cba"} Oct 14 13:33:42 crc kubenswrapper[4725]: I1014 13:33:42.369509 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"545ffde3-abf2-443a-853d-c4d2a35a7e56","Type":"ContainerStarted","Data":"249467a5ca9de6c2e4fccf8dafc81ebb3826c5240706d266787b633c1e7c501e"} Oct 14 13:33:42 crc kubenswrapper[4725]: I1014 13:33:42.386663 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.039792814 podStartE2EDuration="2.386646337s" podCreationTimestamp="2025-10-14 13:33:40 +0000 UTC" firstStartedPulling="2025-10-14 13:33:41.348916409 +0000 UTC m=+1138.197351218" lastFinishedPulling="2025-10-14 13:33:41.695769932 +0000 UTC m=+1138.544204741" observedRunningTime="2025-10-14 13:33:42.384508849 +0000 UTC m=+1139.232943668" watchObservedRunningTime="2025-10-14 13:33:42.386646337 +0000 UTC m=+1139.235081146" Oct 14 13:33:43 crc kubenswrapper[4725]: I1014 13:33:43.380257 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9cfe2404-ed55-4965-830e-88dbdd3bb9b6","Type":"ContainerStarted","Data":"caf1b9311bcb019af7063c20c4f48239a98c1a5dfa483822085eb3949000705c"} Oct 14 13:33:43 crc kubenswrapper[4725]: I1014 13:33:43.380607 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Oct 14 13:33:43 crc kubenswrapper[4725]: I1014 13:33:43.380687 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9cfe2404-ed55-4965-830e-88dbdd3bb9b6" containerName="ceilometer-central-agent" containerID="cri-o://3204b74695bf3652a70ddb7eae3867d114dab22433a1c9381171368c69a8861c" gracePeriod=30 Oct 14 13:33:43 crc kubenswrapper[4725]: I1014 13:33:43.380967 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9cfe2404-ed55-4965-830e-88dbdd3bb9b6" containerName="proxy-httpd" containerID="cri-o://caf1b9311bcb019af7063c20c4f48239a98c1a5dfa483822085eb3949000705c" gracePeriod=30 Oct 14 13:33:43 crc kubenswrapper[4725]: I1014 13:33:43.381103 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9cfe2404-ed55-4965-830e-88dbdd3bb9b6" containerName="ceilometer-notification-agent" containerID="cri-o://22369bc33614d7c19adb78143f613915e3102af33b0ed49a0fb445ed4bc3f19c" gracePeriod=30 Oct 14 13:33:43 crc kubenswrapper[4725]: I1014 13:33:43.381148 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9cfe2404-ed55-4965-830e-88dbdd3bb9b6" containerName="sg-core" containerID="cri-o://e9d550016739b5296af7c5aa88b98f67b00d4d576119e877414138779d093732" gracePeriod=30 Oct 14 13:33:43 crc kubenswrapper[4725]: I1014 13:33:43.414000 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.454453457 podStartE2EDuration="6.413980079s" podCreationTimestamp="2025-10-14 13:33:37 +0000 UTC" firstStartedPulling="2025-10-14 13:33:38.554956991 +0000 UTC m=+1135.403391800" lastFinishedPulling="2025-10-14 13:33:42.514483613 +0000 UTC m=+1139.362918422" observedRunningTime="2025-10-14 13:33:43.409672589 +0000 UTC m=+1140.258107398" watchObservedRunningTime="2025-10-14 13:33:43.413980079 +0000 UTC m=+1140.262414878" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.265029 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.356421 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cfe2404-ed55-4965-830e-88dbdd3bb9b6-log-httpd\") pod \"9cfe2404-ed55-4965-830e-88dbdd3bb9b6\" (UID: \"9cfe2404-ed55-4965-830e-88dbdd3bb9b6\") " Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.356536 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9cfe2404-ed55-4965-830e-88dbdd3bb9b6-scripts\") pod \"9cfe2404-ed55-4965-830e-88dbdd3bb9b6\" (UID: \"9cfe2404-ed55-4965-830e-88dbdd3bb9b6\") " Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.356565 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cfe2404-ed55-4965-830e-88dbdd3bb9b6-config-data\") pod \"9cfe2404-ed55-4965-830e-88dbdd3bb9b6\" (UID: \"9cfe2404-ed55-4965-830e-88dbdd3bb9b6\") " Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.356722 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cfe2404-ed55-4965-830e-88dbdd3bb9b6-combined-ca-bundle\") pod \"9cfe2404-ed55-4965-830e-88dbdd3bb9b6\" (UID: \"9cfe2404-ed55-4965-830e-88dbdd3bb9b6\") " Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.356755 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cfe2404-ed55-4965-830e-88dbdd3bb9b6-run-httpd\") pod \"9cfe2404-ed55-4965-830e-88dbdd3bb9b6\" (UID: \"9cfe2404-ed55-4965-830e-88dbdd3bb9b6\") " Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.356793 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7nfz\" (UniqueName: \"kubernetes.io/projected/9cfe2404-ed55-4965-830e-88dbdd3bb9b6-kube-api-access-b7nfz\") pod \"9cfe2404-ed55-4965-830e-88dbdd3bb9b6\" (UID: \"9cfe2404-ed55-4965-830e-88dbdd3bb9b6\") " Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.356893 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9cfe2404-ed55-4965-830e-88dbdd3bb9b6-sg-core-conf-yaml\") pod \"9cfe2404-ed55-4965-830e-88dbdd3bb9b6\" (UID: \"9cfe2404-ed55-4965-830e-88dbdd3bb9b6\") " Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.357093 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cfe2404-ed55-4965-830e-88dbdd3bb9b6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9cfe2404-ed55-4965-830e-88dbdd3bb9b6" (UID: "9cfe2404-ed55-4965-830e-88dbdd3bb9b6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.357213 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cfe2404-ed55-4965-830e-88dbdd3bb9b6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9cfe2404-ed55-4965-830e-88dbdd3bb9b6" (UID: "9cfe2404-ed55-4965-830e-88dbdd3bb9b6"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.357715 4725 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cfe2404-ed55-4965-830e-88dbdd3bb9b6-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.357741 4725 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cfe2404-ed55-4965-830e-88dbdd3bb9b6-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.366925 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cfe2404-ed55-4965-830e-88dbdd3bb9b6-kube-api-access-b7nfz" (OuterVolumeSpecName: "kube-api-access-b7nfz") pod "9cfe2404-ed55-4965-830e-88dbdd3bb9b6" (UID: "9cfe2404-ed55-4965-830e-88dbdd3bb9b6"). InnerVolumeSpecName "kube-api-access-b7nfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.370570 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cfe2404-ed55-4965-830e-88dbdd3bb9b6-scripts" (OuterVolumeSpecName: "scripts") pod "9cfe2404-ed55-4965-830e-88dbdd3bb9b6" (UID: "9cfe2404-ed55-4965-830e-88dbdd3bb9b6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.396988 4725 generic.go:334] "Generic (PLEG): container finished" podID="9cfe2404-ed55-4965-830e-88dbdd3bb9b6" containerID="caf1b9311bcb019af7063c20c4f48239a98c1a5dfa483822085eb3949000705c" exitCode=0 Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.397017 4725 generic.go:334] "Generic (PLEG): container finished" podID="9cfe2404-ed55-4965-830e-88dbdd3bb9b6" containerID="e9d550016739b5296af7c5aa88b98f67b00d4d576119e877414138779d093732" exitCode=2 Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.397025 4725 generic.go:334] "Generic (PLEG): container finished" podID="9cfe2404-ed55-4965-830e-88dbdd3bb9b6" containerID="22369bc33614d7c19adb78143f613915e3102af33b0ed49a0fb445ed4bc3f19c" exitCode=0 Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.397032 4725 generic.go:334] "Generic (PLEG): container finished" podID="9cfe2404-ed55-4965-830e-88dbdd3bb9b6" containerID="3204b74695bf3652a70ddb7eae3867d114dab22433a1c9381171368c69a8861c" exitCode=0 Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.397607 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.398561 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9cfe2404-ed55-4965-830e-88dbdd3bb9b6","Type":"ContainerDied","Data":"caf1b9311bcb019af7063c20c4f48239a98c1a5dfa483822085eb3949000705c"} Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.398606 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9cfe2404-ed55-4965-830e-88dbdd3bb9b6","Type":"ContainerDied","Data":"e9d550016739b5296af7c5aa88b98f67b00d4d576119e877414138779d093732"} Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.398625 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9cfe2404-ed55-4965-830e-88dbdd3bb9b6","Type":"ContainerDied","Data":"22369bc33614d7c19adb78143f613915e3102af33b0ed49a0fb445ed4bc3f19c"} Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.398638 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9cfe2404-ed55-4965-830e-88dbdd3bb9b6","Type":"ContainerDied","Data":"3204b74695bf3652a70ddb7eae3867d114dab22433a1c9381171368c69a8861c"} Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.398650 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9cfe2404-ed55-4965-830e-88dbdd3bb9b6","Type":"ContainerDied","Data":"97b75ed923dff3a867ac77ea21fd04ce38d4c2cd957dadf3de28a3a5acbb9c84"} Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.398681 4725 scope.go:117] "RemoveContainer" containerID="caf1b9311bcb019af7063c20c4f48239a98c1a5dfa483822085eb3949000705c" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.408921 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cfe2404-ed55-4965-830e-88dbdd3bb9b6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9cfe2404-ed55-4965-830e-88dbdd3bb9b6" (UID: "9cfe2404-ed55-4965-830e-88dbdd3bb9b6"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.426575 4725 scope.go:117] "RemoveContainer" containerID="e9d550016739b5296af7c5aa88b98f67b00d4d576119e877414138779d093732" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.446775 4725 scope.go:117] "RemoveContainer" containerID="22369bc33614d7c19adb78143f613915e3102af33b0ed49a0fb445ed4bc3f19c" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.459312 4725 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9cfe2404-ed55-4965-830e-88dbdd3bb9b6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.459984 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9cfe2404-ed55-4965-830e-88dbdd3bb9b6-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.460118 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7nfz\" (UniqueName: \"kubernetes.io/projected/9cfe2404-ed55-4965-830e-88dbdd3bb9b6-kube-api-access-b7nfz\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.469012 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cfe2404-ed55-4965-830e-88dbdd3bb9b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9cfe2404-ed55-4965-830e-88dbdd3bb9b6" (UID: "9cfe2404-ed55-4965-830e-88dbdd3bb9b6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.485059 4725 scope.go:117] "RemoveContainer" containerID="3204b74695bf3652a70ddb7eae3867d114dab22433a1c9381171368c69a8861c" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.513844 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cfe2404-ed55-4965-830e-88dbdd3bb9b6-config-data" (OuterVolumeSpecName: "config-data") pod "9cfe2404-ed55-4965-830e-88dbdd3bb9b6" (UID: "9cfe2404-ed55-4965-830e-88dbdd3bb9b6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.562728 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cfe2404-ed55-4965-830e-88dbdd3bb9b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.562751 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cfe2404-ed55-4965-830e-88dbdd3bb9b6-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.603776 4725 scope.go:117] "RemoveContainer" containerID="caf1b9311bcb019af7063c20c4f48239a98c1a5dfa483822085eb3949000705c" Oct 14 13:33:44 crc kubenswrapper[4725]: E1014 13:33:44.604264 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caf1b9311bcb019af7063c20c4f48239a98c1a5dfa483822085eb3949000705c\": container with ID starting with caf1b9311bcb019af7063c20c4f48239a98c1a5dfa483822085eb3949000705c not found: ID does not exist" containerID="caf1b9311bcb019af7063c20c4f48239a98c1a5dfa483822085eb3949000705c" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.604298 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caf1b9311bcb019af7063c20c4f48239a98c1a5dfa483822085eb3949000705c"} err="failed to get container status \"caf1b9311bcb019af7063c20c4f48239a98c1a5dfa483822085eb3949000705c\": rpc error: code = NotFound desc = could not find container \"caf1b9311bcb019af7063c20c4f48239a98c1a5dfa483822085eb3949000705c\": container with ID starting with caf1b9311bcb019af7063c20c4f48239a98c1a5dfa483822085eb3949000705c not found: ID does not exist" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.604324 4725 scope.go:117] "RemoveContainer" containerID="e9d550016739b5296af7c5aa88b98f67b00d4d576119e877414138779d093732" Oct 14 13:33:44 crc kubenswrapper[4725]: E1014 13:33:44.605906 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9d550016739b5296af7c5aa88b98f67b00d4d576119e877414138779d093732\": container with ID starting with e9d550016739b5296af7c5aa88b98f67b00d4d576119e877414138779d093732 not found: ID does not exist" containerID="e9d550016739b5296af7c5aa88b98f67b00d4d576119e877414138779d093732" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.605950 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9d550016739b5296af7c5aa88b98f67b00d4d576119e877414138779d093732"} err="failed to get container status \"e9d550016739b5296af7c5aa88b98f67b00d4d576119e877414138779d093732\": rpc error: code = NotFound desc = could not find container \"e9d550016739b5296af7c5aa88b98f67b00d4d576119e877414138779d093732\": container with ID starting with e9d550016739b5296af7c5aa88b98f67b00d4d576119e877414138779d093732 not found: ID does not exist" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.605966 4725 scope.go:117] "RemoveContainer" containerID="22369bc33614d7c19adb78143f613915e3102af33b0ed49a0fb445ed4bc3f19c" Oct 14 13:33:44 crc kubenswrapper[4725]: E1014 13:33:44.607056 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22369bc33614d7c19adb78143f613915e3102af33b0ed49a0fb445ed4bc3f19c\": container with ID starting with 
22369bc33614d7c19adb78143f613915e3102af33b0ed49a0fb445ed4bc3f19c not found: ID does not exist" containerID="22369bc33614d7c19adb78143f613915e3102af33b0ed49a0fb445ed4bc3f19c" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.607095 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22369bc33614d7c19adb78143f613915e3102af33b0ed49a0fb445ed4bc3f19c"} err="failed to get container status \"22369bc33614d7c19adb78143f613915e3102af33b0ed49a0fb445ed4bc3f19c\": rpc error: code = NotFound desc = could not find container \"22369bc33614d7c19adb78143f613915e3102af33b0ed49a0fb445ed4bc3f19c\": container with ID starting with 22369bc33614d7c19adb78143f613915e3102af33b0ed49a0fb445ed4bc3f19c not found: ID does not exist" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.607129 4725 scope.go:117] "RemoveContainer" containerID="3204b74695bf3652a70ddb7eae3867d114dab22433a1c9381171368c69a8861c" Oct 14 13:33:44 crc kubenswrapper[4725]: E1014 13:33:44.612087 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3204b74695bf3652a70ddb7eae3867d114dab22433a1c9381171368c69a8861c\": container with ID starting with 3204b74695bf3652a70ddb7eae3867d114dab22433a1c9381171368c69a8861c not found: ID does not exist" containerID="3204b74695bf3652a70ddb7eae3867d114dab22433a1c9381171368c69a8861c" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.612126 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3204b74695bf3652a70ddb7eae3867d114dab22433a1c9381171368c69a8861c"} err="failed to get container status \"3204b74695bf3652a70ddb7eae3867d114dab22433a1c9381171368c69a8861c\": rpc error: code = NotFound desc = could not find container \"3204b74695bf3652a70ddb7eae3867d114dab22433a1c9381171368c69a8861c\": container with ID starting with 3204b74695bf3652a70ddb7eae3867d114dab22433a1c9381171368c69a8861c not found: ID does not exist" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.612153 4725 scope.go:117] "RemoveContainer" containerID="caf1b9311bcb019af7063c20c4f48239a98c1a5dfa483822085eb3949000705c" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.612981 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caf1b9311bcb019af7063c20c4f48239a98c1a5dfa483822085eb3949000705c"} err="failed to get container status \"caf1b9311bcb019af7063c20c4f48239a98c1a5dfa483822085eb3949000705c\": rpc error: code = NotFound desc = could not find container \"caf1b9311bcb019af7063c20c4f48239a98c1a5dfa483822085eb3949000705c\": container with ID starting with caf1b9311bcb019af7063c20c4f48239a98c1a5dfa483822085eb3949000705c not found: ID does not exist" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.613033 4725 scope.go:117] "RemoveContainer" containerID="e9d550016739b5296af7c5aa88b98f67b00d4d576119e877414138779d093732" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.613693 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9d550016739b5296af7c5aa88b98f67b00d4d576119e877414138779d093732"} err="failed to get container status \"e9d550016739b5296af7c5aa88b98f67b00d4d576119e877414138779d093732\": rpc error: code = NotFound desc = could not find container \"e9d550016739b5296af7c5aa88b98f67b00d4d576119e877414138779d093732\": container with ID starting with e9d550016739b5296af7c5aa88b98f67b00d4d576119e877414138779d093732 not found: ID does not exist" Oct 14 
13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.613737 4725 scope.go:117] "RemoveContainer" containerID="22369bc33614d7c19adb78143f613915e3102af33b0ed49a0fb445ed4bc3f19c" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.614000 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22369bc33614d7c19adb78143f613915e3102af33b0ed49a0fb445ed4bc3f19c"} err="failed to get container status \"22369bc33614d7c19adb78143f613915e3102af33b0ed49a0fb445ed4bc3f19c\": rpc error: code = NotFound desc = could not find container \"22369bc33614d7c19adb78143f613915e3102af33b0ed49a0fb445ed4bc3f19c\": container with ID starting with 22369bc33614d7c19adb78143f613915e3102af33b0ed49a0fb445ed4bc3f19c not found: ID does not exist" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.614019 4725 scope.go:117] "RemoveContainer" containerID="3204b74695bf3652a70ddb7eae3867d114dab22433a1c9381171368c69a8861c" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.615089 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3204b74695bf3652a70ddb7eae3867d114dab22433a1c9381171368c69a8861c"} err="failed to get container status \"3204b74695bf3652a70ddb7eae3867d114dab22433a1c9381171368c69a8861c\": rpc error: code = NotFound desc = could not find container \"3204b74695bf3652a70ddb7eae3867d114dab22433a1c9381171368c69a8861c\": container with ID starting with 3204b74695bf3652a70ddb7eae3867d114dab22433a1c9381171368c69a8861c not found: ID does not exist" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.615112 4725 scope.go:117] "RemoveContainer" containerID="caf1b9311bcb019af7063c20c4f48239a98c1a5dfa483822085eb3949000705c" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.615625 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caf1b9311bcb019af7063c20c4f48239a98c1a5dfa483822085eb3949000705c"} err="failed to get container status \"caf1b9311bcb019af7063c20c4f48239a98c1a5dfa483822085eb3949000705c\": rpc error: code = NotFound desc = could not find container \"caf1b9311bcb019af7063c20c4f48239a98c1a5dfa483822085eb3949000705c\": container with ID starting with caf1b9311bcb019af7063c20c4f48239a98c1a5dfa483822085eb3949000705c not found: ID does not exist" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.615657 4725 scope.go:117] "RemoveContainer" containerID="e9d550016739b5296af7c5aa88b98f67b00d4d576119e877414138779d093732" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.616028 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9d550016739b5296af7c5aa88b98f67b00d4d576119e877414138779d093732"} err="failed to get container status \"e9d550016739b5296af7c5aa88b98f67b00d4d576119e877414138779d093732\": rpc error: code = NotFound desc = could not find container \"e9d550016739b5296af7c5aa88b98f67b00d4d576119e877414138779d093732\": container with ID starting with e9d550016739b5296af7c5aa88b98f67b00d4d576119e877414138779d093732 not found: ID does not exist" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.616051 4725 scope.go:117] "RemoveContainer" containerID="22369bc33614d7c19adb78143f613915e3102af33b0ed49a0fb445ed4bc3f19c" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.617999 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22369bc33614d7c19adb78143f613915e3102af33b0ed49a0fb445ed4bc3f19c"} err="failed to get container status 
\"22369bc33614d7c19adb78143f613915e3102af33b0ed49a0fb445ed4bc3f19c\": rpc error: code = NotFound desc = could not find container \"22369bc33614d7c19adb78143f613915e3102af33b0ed49a0fb445ed4bc3f19c\": container with ID starting with 22369bc33614d7c19adb78143f613915e3102af33b0ed49a0fb445ed4bc3f19c not found: ID does not exist" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.618042 4725 scope.go:117] "RemoveContainer" containerID="3204b74695bf3652a70ddb7eae3867d114dab22433a1c9381171368c69a8861c" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.618730 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3204b74695bf3652a70ddb7eae3867d114dab22433a1c9381171368c69a8861c"} err="failed to get container status \"3204b74695bf3652a70ddb7eae3867d114dab22433a1c9381171368c69a8861c\": rpc error: code = NotFound desc = could not find container \"3204b74695bf3652a70ddb7eae3867d114dab22433a1c9381171368c69a8861c\": container with ID starting with 3204b74695bf3652a70ddb7eae3867d114dab22433a1c9381171368c69a8861c not found: ID does not exist" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.618769 4725 scope.go:117] "RemoveContainer" containerID="caf1b9311bcb019af7063c20c4f48239a98c1a5dfa483822085eb3949000705c" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.619260 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caf1b9311bcb019af7063c20c4f48239a98c1a5dfa483822085eb3949000705c"} err="failed to get container status \"caf1b9311bcb019af7063c20c4f48239a98c1a5dfa483822085eb3949000705c\": rpc error: code = NotFound desc = could not find container \"caf1b9311bcb019af7063c20c4f48239a98c1a5dfa483822085eb3949000705c\": container with ID starting with caf1b9311bcb019af7063c20c4f48239a98c1a5dfa483822085eb3949000705c not found: ID does not exist" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.619307 4725 scope.go:117] "RemoveContainer" containerID="e9d550016739b5296af7c5aa88b98f67b00d4d576119e877414138779d093732" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.620292 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9d550016739b5296af7c5aa88b98f67b00d4d576119e877414138779d093732"} err="failed to get container status \"e9d550016739b5296af7c5aa88b98f67b00d4d576119e877414138779d093732\": rpc error: code = NotFound desc = could not find container \"e9d550016739b5296af7c5aa88b98f67b00d4d576119e877414138779d093732\": container with ID starting with e9d550016739b5296af7c5aa88b98f67b00d4d576119e877414138779d093732 not found: ID does not exist" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.620384 4725 scope.go:117] "RemoveContainer" containerID="22369bc33614d7c19adb78143f613915e3102af33b0ed49a0fb445ed4bc3f19c" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.621391 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22369bc33614d7c19adb78143f613915e3102af33b0ed49a0fb445ed4bc3f19c"} err="failed to get container status \"22369bc33614d7c19adb78143f613915e3102af33b0ed49a0fb445ed4bc3f19c\": rpc error: code = NotFound desc = could not find container \"22369bc33614d7c19adb78143f613915e3102af33b0ed49a0fb445ed4bc3f19c\": container with ID starting with 22369bc33614d7c19adb78143f613915e3102af33b0ed49a0fb445ed4bc3f19c not found: ID does not exist" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.621418 4725 scope.go:117] "RemoveContainer" 
containerID="3204b74695bf3652a70ddb7eae3867d114dab22433a1c9381171368c69a8861c" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.622086 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3204b74695bf3652a70ddb7eae3867d114dab22433a1c9381171368c69a8861c"} err="failed to get container status \"3204b74695bf3652a70ddb7eae3867d114dab22433a1c9381171368c69a8861c\": rpc error: code = NotFound desc = could not find container \"3204b74695bf3652a70ddb7eae3867d114dab22433a1c9381171368c69a8861c\": container with ID starting with 3204b74695bf3652a70ddb7eae3867d114dab22433a1c9381171368c69a8861c not found: ID does not exist" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.775690 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-7264-account-create-4skdt"] Oct 14 13:33:44 crc kubenswrapper[4725]: E1014 13:33:44.776137 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cfe2404-ed55-4965-830e-88dbdd3bb9b6" containerName="proxy-httpd" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.776159 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cfe2404-ed55-4965-830e-88dbdd3bb9b6" containerName="proxy-httpd" Oct 14 13:33:44 crc kubenswrapper[4725]: E1014 13:33:44.776173 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cfe2404-ed55-4965-830e-88dbdd3bb9b6" containerName="ceilometer-central-agent" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.776180 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cfe2404-ed55-4965-830e-88dbdd3bb9b6" containerName="ceilometer-central-agent" Oct 14 13:33:44 crc kubenswrapper[4725]: E1014 13:33:44.776205 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cfe2404-ed55-4965-830e-88dbdd3bb9b6" containerName="sg-core" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.776213 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cfe2404-ed55-4965-830e-88dbdd3bb9b6" containerName="sg-core" Oct 14 13:33:44 crc kubenswrapper[4725]: E1014 13:33:44.776227 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cfe2404-ed55-4965-830e-88dbdd3bb9b6" containerName="ceilometer-notification-agent" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.776236 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cfe2404-ed55-4965-830e-88dbdd3bb9b6" containerName="ceilometer-notification-agent" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.776488 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cfe2404-ed55-4965-830e-88dbdd3bb9b6" containerName="ceilometer-central-agent" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.776508 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cfe2404-ed55-4965-830e-88dbdd3bb9b6" containerName="proxy-httpd" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.776523 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cfe2404-ed55-4965-830e-88dbdd3bb9b6" containerName="ceilometer-notification-agent" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.776532 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cfe2404-ed55-4965-830e-88dbdd3bb9b6" containerName="sg-core" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.777207 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-7264-account-create-4skdt" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.780382 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.804283 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-7264-account-create-4skdt"] Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.859655 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.872971 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cpv2\" (UniqueName: \"kubernetes.io/projected/8c596e70-a030-4573-9cbe-3782c7c02fea-kube-api-access-4cpv2\") pod \"nova-api-7264-account-create-4skdt\" (UID: \"8c596e70-a030-4573-9cbe-3782c7c02fea\") " pod="openstack/nova-api-7264-account-create-4skdt" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.894913 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.911839 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.914535 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.918201 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.918247 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.918519 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.920548 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.975017 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b\") " pod="openstack/ceilometer-0" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.975073 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-run-httpd\") pod \"ceilometer-0\" (UID: \"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b\") " pod="openstack/ceilometer-0" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.975109 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b\") " pod="openstack/ceilometer-0" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.975378 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cpv2\" (UniqueName: \"kubernetes.io/projected/8c596e70-a030-4573-9cbe-3782c7c02fea-kube-api-access-4cpv2\") pod \"nova-api-7264-account-create-4skdt\" (UID: 
\"8c596e70-a030-4573-9cbe-3782c7c02fea\") " pod="openstack/nova-api-7264-account-create-4skdt" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.975426 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-log-httpd\") pod \"ceilometer-0\" (UID: \"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b\") " pod="openstack/ceilometer-0" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.975480 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-config-data\") pod \"ceilometer-0\" (UID: \"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b\") " pod="openstack/ceilometer-0" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.975520 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb7fc\" (UniqueName: \"kubernetes.io/projected/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-kube-api-access-nb7fc\") pod \"ceilometer-0\" (UID: \"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b\") " pod="openstack/ceilometer-0" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.975554 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b\") " pod="openstack/ceilometer-0" Oct 14 13:33:44 crc kubenswrapper[4725]: I1014 13:33:44.975614 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-scripts\") pod \"ceilometer-0\" (UID: \"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b\") " pod="openstack/ceilometer-0" Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.016524 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cpv2\" (UniqueName: \"kubernetes.io/projected/8c596e70-a030-4573-9cbe-3782c7c02fea-kube-api-access-4cpv2\") pod \"nova-api-7264-account-create-4skdt\" (UID: \"8c596e70-a030-4573-9cbe-3782c7c02fea\") " pod="openstack/nova-api-7264-account-create-4skdt" Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.067994 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-749ff78757-kdtzj" Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.073233 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-749ff78757-kdtzj" Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.077191 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-log-httpd\") pod \"ceilometer-0\" (UID: \"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b\") " pod="openstack/ceilometer-0" Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.077247 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-config-data\") pod \"ceilometer-0\" (UID: \"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b\") " pod="openstack/ceilometer-0" Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.077296 4725 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-nb7fc\" (UniqueName: \"kubernetes.io/projected/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-kube-api-access-nb7fc\") pod \"ceilometer-0\" (UID: \"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b\") " pod="openstack/ceilometer-0" Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.077319 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b\") " pod="openstack/ceilometer-0" Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.077361 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-scripts\") pod \"ceilometer-0\" (UID: \"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b\") " pod="openstack/ceilometer-0" Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.077388 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b\") " pod="openstack/ceilometer-0" Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.077406 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-run-httpd\") pod \"ceilometer-0\" (UID: \"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b\") " pod="openstack/ceilometer-0" Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.077426 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b\") " pod="openstack/ceilometer-0" Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.077719 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-log-httpd\") pod \"ceilometer-0\" (UID: \"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b\") " pod="openstack/ceilometer-0" Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.078370 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-run-httpd\") pod \"ceilometer-0\" (UID: \"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b\") " pod="openstack/ceilometer-0" Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.081394 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-config-data\") pod \"ceilometer-0\" (UID: \"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b\") " pod="openstack/ceilometer-0" Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.082313 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b\") " pod="openstack/ceilometer-0" Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.085823 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-scripts\") pod \"ceilometer-0\" (UID: \"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b\") " pod="openstack/ceilometer-0" Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.086198 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b\") " pod="openstack/ceilometer-0" Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.086248 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b\") " pod="openstack/ceilometer-0" Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.099604 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb7fc\" (UniqueName: \"kubernetes.io/projected/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-kube-api-access-nb7fc\") pod \"ceilometer-0\" (UID: \"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b\") " pod="openstack/ceilometer-0" Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.146653 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-9d5c84b44-vssnm" Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.152332 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-7264-account-create-4skdt" Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.182152 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/551500de-77a9-4f28-ab4b-b8259e04804b-horizon-tls-certs\") pod \"551500de-77a9-4f28-ab4b-b8259e04804b\" (UID: \"551500de-77a9-4f28-ab4b-b8259e04804b\") " Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.182222 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/551500de-77a9-4f28-ab4b-b8259e04804b-scripts\") pod \"551500de-77a9-4f28-ab4b-b8259e04804b\" (UID: \"551500de-77a9-4f28-ab4b-b8259e04804b\") " Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.182249 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkvfr\" (UniqueName: \"kubernetes.io/projected/551500de-77a9-4f28-ab4b-b8259e04804b-kube-api-access-qkvfr\") pod \"551500de-77a9-4f28-ab4b-b8259e04804b\" (UID: \"551500de-77a9-4f28-ab4b-b8259e04804b\") " Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.182299 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/551500de-77a9-4f28-ab4b-b8259e04804b-logs\") pod \"551500de-77a9-4f28-ab4b-b8259e04804b\" (UID: \"551500de-77a9-4f28-ab4b-b8259e04804b\") " Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.182518 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/551500de-77a9-4f28-ab4b-b8259e04804b-horizon-secret-key\") pod \"551500de-77a9-4f28-ab4b-b8259e04804b\" (UID: \"551500de-77a9-4f28-ab4b-b8259e04804b\") " Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.182572 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/551500de-77a9-4f28-ab4b-b8259e04804b-combined-ca-bundle\") pod \"551500de-77a9-4f28-ab4b-b8259e04804b\" (UID: \"551500de-77a9-4f28-ab4b-b8259e04804b\") " Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.182608 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/551500de-77a9-4f28-ab4b-b8259e04804b-config-data\") pod \"551500de-77a9-4f28-ab4b-b8259e04804b\" (UID: \"551500de-77a9-4f28-ab4b-b8259e04804b\") " Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.182993 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/551500de-77a9-4f28-ab4b-b8259e04804b-logs" (OuterVolumeSpecName: "logs") pod "551500de-77a9-4f28-ab4b-b8259e04804b" (UID: "551500de-77a9-4f28-ab4b-b8259e04804b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.183398 4725 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/551500de-77a9-4f28-ab4b-b8259e04804b-logs\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.186602 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/551500de-77a9-4f28-ab4b-b8259e04804b-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "551500de-77a9-4f28-ab4b-b8259e04804b" (UID: "551500de-77a9-4f28-ab4b-b8259e04804b"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.189951 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/551500de-77a9-4f28-ab4b-b8259e04804b-kube-api-access-qkvfr" (OuterVolumeSpecName: "kube-api-access-qkvfr") pod "551500de-77a9-4f28-ab4b-b8259e04804b" (UID: "551500de-77a9-4f28-ab4b-b8259e04804b"). InnerVolumeSpecName "kube-api-access-qkvfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.211909 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/551500de-77a9-4f28-ab4b-b8259e04804b-scripts" (OuterVolumeSpecName: "scripts") pod "551500de-77a9-4f28-ab4b-b8259e04804b" (UID: "551500de-77a9-4f28-ab4b-b8259e04804b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.217926 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/551500de-77a9-4f28-ab4b-b8259e04804b-config-data" (OuterVolumeSpecName: "config-data") pod "551500de-77a9-4f28-ab4b-b8259e04804b" (UID: "551500de-77a9-4f28-ab4b-b8259e04804b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.237503 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.238945 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/551500de-77a9-4f28-ab4b-b8259e04804b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "551500de-77a9-4f28-ab4b-b8259e04804b" (UID: "551500de-77a9-4f28-ab4b-b8259e04804b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.256595 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/551500de-77a9-4f28-ab4b-b8259e04804b-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "551500de-77a9-4f28-ab4b-b8259e04804b" (UID: "551500de-77a9-4f28-ab4b-b8259e04804b"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.285636 4725 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/551500de-77a9-4f28-ab4b-b8259e04804b-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.285667 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/551500de-77a9-4f28-ab4b-b8259e04804b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.285676 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/551500de-77a9-4f28-ab4b-b8259e04804b-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.285697 4725 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/551500de-77a9-4f28-ab4b-b8259e04804b-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.285706 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/551500de-77a9-4f28-ab4b-b8259e04804b-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.285715 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkvfr\" (UniqueName: \"kubernetes.io/projected/551500de-77a9-4f28-ab4b-b8259e04804b-kube-api-access-qkvfr\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.416965 4725 generic.go:334] "Generic (PLEG): container finished" podID="551500de-77a9-4f28-ab4b-b8259e04804b" containerID="12820d3bbd64cf010de74b038f5a42c61026b19b4e28f3044936bcaf135f4e48" exitCode=137 Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.417008 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-9d5c84b44-vssnm" Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.417046 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d5c84b44-vssnm" event={"ID":"551500de-77a9-4f28-ab4b-b8259e04804b","Type":"ContainerDied","Data":"12820d3bbd64cf010de74b038f5a42c61026b19b4e28f3044936bcaf135f4e48"} Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.417405 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9d5c84b44-vssnm" event={"ID":"551500de-77a9-4f28-ab4b-b8259e04804b","Type":"ContainerDied","Data":"6e0d6c8cd3bf4408ac4759862b6535d587156545328edd7d6b909875f00ec9e8"} Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.417433 4725 scope.go:117] "RemoveContainer" containerID="2f06fee29194370b1ad8593cc12b848a47dce9aefe7a38963cecbdc075b21a71" Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.458357 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-9d5c84b44-vssnm"] Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.471317 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-9d5c84b44-vssnm"] Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.594435 4725 scope.go:117] "RemoveContainer" containerID="12820d3bbd64cf010de74b038f5a42c61026b19b4e28f3044936bcaf135f4e48" Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.613462 4725 scope.go:117] "RemoveContainer" containerID="2f06fee29194370b1ad8593cc12b848a47dce9aefe7a38963cecbdc075b21a71" Oct 14 13:33:45 crc kubenswrapper[4725]: E1014 13:33:45.613915 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f06fee29194370b1ad8593cc12b848a47dce9aefe7a38963cecbdc075b21a71\": container with ID starting with 2f06fee29194370b1ad8593cc12b848a47dce9aefe7a38963cecbdc075b21a71 not found: ID does not exist" containerID="2f06fee29194370b1ad8593cc12b848a47dce9aefe7a38963cecbdc075b21a71" Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.613947 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f06fee29194370b1ad8593cc12b848a47dce9aefe7a38963cecbdc075b21a71"} err="failed to get container status \"2f06fee29194370b1ad8593cc12b848a47dce9aefe7a38963cecbdc075b21a71\": rpc error: code = NotFound desc = could not find container \"2f06fee29194370b1ad8593cc12b848a47dce9aefe7a38963cecbdc075b21a71\": container with ID starting with 2f06fee29194370b1ad8593cc12b848a47dce9aefe7a38963cecbdc075b21a71 not found: ID does not exist" Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.613969 4725 scope.go:117] "RemoveContainer" containerID="12820d3bbd64cf010de74b038f5a42c61026b19b4e28f3044936bcaf135f4e48" Oct 14 13:33:45 crc kubenswrapper[4725]: E1014 13:33:45.614259 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12820d3bbd64cf010de74b038f5a42c61026b19b4e28f3044936bcaf135f4e48\": container with ID starting with 12820d3bbd64cf010de74b038f5a42c61026b19b4e28f3044936bcaf135f4e48 not found: ID does not exist" containerID="12820d3bbd64cf010de74b038f5a42c61026b19b4e28f3044936bcaf135f4e48" Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.614302 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12820d3bbd64cf010de74b038f5a42c61026b19b4e28f3044936bcaf135f4e48"} err="failed to get container status 
\"12820d3bbd64cf010de74b038f5a42c61026b19b4e28f3044936bcaf135f4e48\": rpc error: code = NotFound desc = could not find container \"12820d3bbd64cf010de74b038f5a42c61026b19b4e28f3044936bcaf135f4e48\": container with ID starting with 12820d3bbd64cf010de74b038f5a42c61026b19b4e28f3044936bcaf135f4e48 not found: ID does not exist" Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.683284 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-7264-account-create-4skdt"] Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.775888 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.931764 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="551500de-77a9-4f28-ab4b-b8259e04804b" path="/var/lib/kubelet/pods/551500de-77a9-4f28-ab4b-b8259e04804b/volumes" Oct 14 13:33:45 crc kubenswrapper[4725]: I1014 13:33:45.932655 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cfe2404-ed55-4965-830e-88dbdd3bb9b6" path="/var/lib/kubelet/pods/9cfe2404-ed55-4965-830e-88dbdd3bb9b6/volumes" Oct 14 13:33:46 crc kubenswrapper[4725]: I1014 13:33:46.426287 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b","Type":"ContainerStarted","Data":"6d77dbb56b5857601d5150bfc464c2606ffbed8bf5af5a233661dc4a7a8aae53"} Oct 14 13:33:46 crc kubenswrapper[4725]: I1014 13:33:46.428861 4725 generic.go:334] "Generic (PLEG): container finished" podID="8c596e70-a030-4573-9cbe-3782c7c02fea" containerID="bbb0e62b1056d3930e3ca49ec5814dbd5caaf13a2c466858a065aec89fef9b5e" exitCode=0 Oct 14 13:33:46 crc kubenswrapper[4725]: I1014 13:33:46.428914 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-7264-account-create-4skdt" event={"ID":"8c596e70-a030-4573-9cbe-3782c7c02fea","Type":"ContainerDied","Data":"bbb0e62b1056d3930e3ca49ec5814dbd5caaf13a2c466858a065aec89fef9b5e"} Oct 14 13:33:46 crc kubenswrapper[4725]: I1014 13:33:46.428983 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-7264-account-create-4skdt" event={"ID":"8c596e70-a030-4573-9cbe-3782c7c02fea","Type":"ContainerStarted","Data":"def1327e366c94054813a5fcdbcd07ba6997518100e8a8f0736fb9aac0f65116"} Oct 14 13:33:47 crc kubenswrapper[4725]: I1014 13:33:47.439495 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b","Type":"ContainerStarted","Data":"e2f42dbf5703b54c567dd0ce0e375b082d91cbabc595269e2a8d11211c71a158"} Oct 14 13:33:47 crc kubenswrapper[4725]: I1014 13:33:47.770106 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 14 13:33:47 crc kubenswrapper[4725]: I1014 13:33:47.771907 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 14 13:33:47 crc kubenswrapper[4725]: I1014 13:33:47.822047 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-7264-account-create-4skdt" Oct 14 13:33:47 crc kubenswrapper[4725]: I1014 13:33:47.835409 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 14 13:33:47 crc kubenswrapper[4725]: I1014 13:33:47.854233 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 14 13:33:47 crc kubenswrapper[4725]: I1014 13:33:47.907688 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 14 13:33:47 crc kubenswrapper[4725]: I1014 13:33:47.929840 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cpv2\" (UniqueName: \"kubernetes.io/projected/8c596e70-a030-4573-9cbe-3782c7c02fea-kube-api-access-4cpv2\") pod \"8c596e70-a030-4573-9cbe-3782c7c02fea\" (UID: \"8c596e70-a030-4573-9cbe-3782c7c02fea\") " Oct 14 13:33:47 crc kubenswrapper[4725]: I1014 13:33:47.953247 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c596e70-a030-4573-9cbe-3782c7c02fea-kube-api-access-4cpv2" (OuterVolumeSpecName: "kube-api-access-4cpv2") pod "8c596e70-a030-4573-9cbe-3782c7c02fea" (UID: "8c596e70-a030-4573-9cbe-3782c7c02fea"). InnerVolumeSpecName "kube-api-access-4cpv2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:33:47 crc kubenswrapper[4725]: I1014 13:33:47.984886 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 14 13:33:47 crc kubenswrapper[4725]: I1014 13:33:47.984938 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 14 13:33:48 crc kubenswrapper[4725]: I1014 13:33:48.020014 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 14 13:33:48 crc kubenswrapper[4725]: I1014 13:33:48.030016 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 14 13:33:48 crc kubenswrapper[4725]: I1014 13:33:48.032072 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cpv2\" (UniqueName: \"kubernetes.io/projected/8c596e70-a030-4573-9cbe-3782c7c02fea-kube-api-access-4cpv2\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:48 crc kubenswrapper[4725]: I1014 13:33:48.448699 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-7264-account-create-4skdt" event={"ID":"8c596e70-a030-4573-9cbe-3782c7c02fea","Type":"ContainerDied","Data":"def1327e366c94054813a5fcdbcd07ba6997518100e8a8f0736fb9aac0f65116"} Oct 14 13:33:48 crc kubenswrapper[4725]: I1014 13:33:48.449006 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="def1327e366c94054813a5fcdbcd07ba6997518100e8a8f0736fb9aac0f65116" Oct 14 13:33:48 crc kubenswrapper[4725]: I1014 13:33:48.448724 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-7264-account-create-4skdt" Oct 14 13:33:48 crc kubenswrapper[4725]: I1014 13:33:48.450843 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b","Type":"ContainerStarted","Data":"3ee6e64a5fdbbe17c3797c534357d4064d7ddee8062225bdd5bfed0dca26ddc1"} Oct 14 13:33:48 crc kubenswrapper[4725]: I1014 13:33:48.451225 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 14 13:33:48 crc kubenswrapper[4725]: I1014 13:33:48.451276 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 14 13:33:48 crc kubenswrapper[4725]: I1014 13:33:48.451290 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 14 13:33:48 crc kubenswrapper[4725]: I1014 13:33:48.451302 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 14 13:33:49 crc kubenswrapper[4725]: I1014 13:33:49.463154 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b","Type":"ContainerStarted","Data":"7b73e107079d4f730c4f854f440fb77a8b4b0905a211514d08f5cb876b982ae5"} Oct 14 13:33:50 crc kubenswrapper[4725]: I1014 13:33:50.556817 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 14 13:33:50 crc kubenswrapper[4725]: I1014 13:33:50.557196 4725 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 14 13:33:50 crc kubenswrapper[4725]: I1014 13:33:50.837738 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 14 13:33:50 crc kubenswrapper[4725]: I1014 13:33:50.837867 4725 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 14 13:33:50 crc kubenswrapper[4725]: I1014 13:33:50.841994 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 14 13:33:50 crc kubenswrapper[4725]: I1014 13:33:50.845812 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 14 13:33:50 crc kubenswrapper[4725]: I1014 13:33:50.860534 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Oct 14 13:33:52 crc kubenswrapper[4725]: I1014 13:33:52.496671 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b","Type":"ContainerStarted","Data":"0ae1e7a4651221ab1b8ba78cb6154510762e7804f8f7848d64f6b866f0c66343"} Oct 14 13:33:52 crc kubenswrapper[4725]: I1014 13:33:52.497218 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 14 13:33:52 crc kubenswrapper[4725]: I1014 13:33:52.496942 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b" containerName="proxy-httpd" containerID="cri-o://0ae1e7a4651221ab1b8ba78cb6154510762e7804f8f7848d64f6b866f0c66343" gracePeriod=30 Oct 14 13:33:52 crc kubenswrapper[4725]: I1014 13:33:52.496826 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b" containerName="ceilometer-central-agent" containerID="cri-o://e2f42dbf5703b54c567dd0ce0e375b082d91cbabc595269e2a8d11211c71a158" gracePeriod=30 Oct 14 13:33:52 crc kubenswrapper[4725]: I1014 13:33:52.496951 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b" containerName="ceilometer-notification-agent" containerID="cri-o://3ee6e64a5fdbbe17c3797c534357d4064d7ddee8062225bdd5bfed0dca26ddc1" gracePeriod=30 Oct 14 13:33:52 crc kubenswrapper[4725]: I1014 13:33:52.496943 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b" containerName="sg-core" containerID="cri-o://7b73e107079d4f730c4f854f440fb77a8b4b0905a211514d08f5cb876b982ae5" gracePeriod=30 Oct 14 13:33:53 crc kubenswrapper[4725]: I1014 13:33:53.508754 4725 generic.go:334] "Generic (PLEG): container finished" podID="f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b" containerID="0ae1e7a4651221ab1b8ba78cb6154510762e7804f8f7848d64f6b866f0c66343" exitCode=0 Oct 14 13:33:53 crc kubenswrapper[4725]: I1014 13:33:53.510804 4725 generic.go:334] "Generic (PLEG): container finished" podID="f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b" containerID="7b73e107079d4f730c4f854f440fb77a8b4b0905a211514d08f5cb876b982ae5" exitCode=2 Oct 14 13:33:53 crc kubenswrapper[4725]: I1014 13:33:53.510997 4725 generic.go:334] "Generic (PLEG): container finished" podID="f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b" containerID="3ee6e64a5fdbbe17c3797c534357d4064d7ddee8062225bdd5bfed0dca26ddc1" exitCode=0 Oct 14 13:33:53 crc kubenswrapper[4725]: I1014 13:33:53.508836 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b","Type":"ContainerDied","Data":"0ae1e7a4651221ab1b8ba78cb6154510762e7804f8f7848d64f6b866f0c66343"} Oct 14 13:33:53 crc kubenswrapper[4725]: I1014 13:33:53.511343 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b","Type":"ContainerDied","Data":"7b73e107079d4f730c4f854f440fb77a8b4b0905a211514d08f5cb876b982ae5"} Oct 14 13:33:53 crc kubenswrapper[4725]: I1014 13:33:53.511531 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b","Type":"ContainerDied","Data":"3ee6e64a5fdbbe17c3797c534357d4064d7ddee8062225bdd5bfed0dca26ddc1"} Oct 14 13:33:54 crc kubenswrapper[4725]: I1014 13:33:54.991001 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=5.415858243 podStartE2EDuration="10.990979654s" podCreationTimestamp="2025-10-14 13:33:44 +0000 UTC" firstStartedPulling="2025-10-14 13:33:45.791673125 +0000 UTC m=+1142.640107924" lastFinishedPulling="2025-10-14 13:33:51.366794526 +0000 UTC m=+1148.215229335" observedRunningTime="2025-10-14 13:33:52.531336762 +0000 UTC m=+1149.379771571" watchObservedRunningTime="2025-10-14 13:33:54.990979654 +0000 UTC m=+1151.839414463" Oct 14 13:33:54 crc kubenswrapper[4725]: I1014 13:33:54.992973 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-7fb6-account-create-xdc48"] Oct 14 13:33:54 crc kubenswrapper[4725]: E1014 13:33:54.993399 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="551500de-77a9-4f28-ab4b-b8259e04804b" containerName="horizon" Oct 14 13:33:54 crc 
kubenswrapper[4725]: I1014 13:33:54.993425 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="551500de-77a9-4f28-ab4b-b8259e04804b" containerName="horizon" Oct 14 13:33:54 crc kubenswrapper[4725]: E1014 13:33:54.993439 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c596e70-a030-4573-9cbe-3782c7c02fea" containerName="mariadb-account-create" Oct 14 13:33:54 crc kubenswrapper[4725]: I1014 13:33:54.993465 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c596e70-a030-4573-9cbe-3782c7c02fea" containerName="mariadb-account-create" Oct 14 13:33:54 crc kubenswrapper[4725]: E1014 13:33:54.993489 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="551500de-77a9-4f28-ab4b-b8259e04804b" containerName="horizon-log" Oct 14 13:33:54 crc kubenswrapper[4725]: I1014 13:33:54.993498 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="551500de-77a9-4f28-ab4b-b8259e04804b" containerName="horizon-log" Oct 14 13:33:54 crc kubenswrapper[4725]: I1014 13:33:54.993694 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="551500de-77a9-4f28-ab4b-b8259e04804b" containerName="horizon" Oct 14 13:33:54 crc kubenswrapper[4725]: I1014 13:33:54.993717 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c596e70-a030-4573-9cbe-3782c7c02fea" containerName="mariadb-account-create" Oct 14 13:33:54 crc kubenswrapper[4725]: I1014 13:33:54.993731 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="551500de-77a9-4f28-ab4b-b8259e04804b" containerName="horizon-log" Oct 14 13:33:54 crc kubenswrapper[4725]: I1014 13:33:54.994491 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7fb6-account-create-xdc48" Oct 14 13:33:54 crc kubenswrapper[4725]: I1014 13:33:54.996647 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Oct 14 13:33:55 crc kubenswrapper[4725]: I1014 13:33:55.006906 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7fb6-account-create-xdc48"] Oct 14 13:33:55 crc kubenswrapper[4725]: I1014 13:33:55.060291 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvkml\" (UniqueName: \"kubernetes.io/projected/f1eaa22a-4372-40ab-be9f-6e1c845407dc-kube-api-access-fvkml\") pod \"nova-cell0-7fb6-account-create-xdc48\" (UID: \"f1eaa22a-4372-40ab-be9f-6e1c845407dc\") " pod="openstack/nova-cell0-7fb6-account-create-xdc48" Oct 14 13:33:55 crc kubenswrapper[4725]: I1014 13:33:55.161630 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvkml\" (UniqueName: \"kubernetes.io/projected/f1eaa22a-4372-40ab-be9f-6e1c845407dc-kube-api-access-fvkml\") pod \"nova-cell0-7fb6-account-create-xdc48\" (UID: \"f1eaa22a-4372-40ab-be9f-6e1c845407dc\") " pod="openstack/nova-cell0-7fb6-account-create-xdc48" Oct 14 13:33:55 crc kubenswrapper[4725]: I1014 13:33:55.188929 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-6745-account-create-kchwn"] Oct 14 13:33:55 crc kubenswrapper[4725]: I1014 13:33:55.190027 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-6745-account-create-kchwn" Oct 14 13:33:55 crc kubenswrapper[4725]: I1014 13:33:55.195808 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Oct 14 13:33:55 crc kubenswrapper[4725]: I1014 13:33:55.199043 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-6745-account-create-kchwn"] Oct 14 13:33:55 crc kubenswrapper[4725]: I1014 13:33:55.202561 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvkml\" (UniqueName: \"kubernetes.io/projected/f1eaa22a-4372-40ab-be9f-6e1c845407dc-kube-api-access-fvkml\") pod \"nova-cell0-7fb6-account-create-xdc48\" (UID: \"f1eaa22a-4372-40ab-be9f-6e1c845407dc\") " pod="openstack/nova-cell0-7fb6-account-create-xdc48" Oct 14 13:33:55 crc kubenswrapper[4725]: I1014 13:33:55.263250 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smghv\" (UniqueName: \"kubernetes.io/projected/9e65b9ca-75fc-422b-bd39-55be57dd727f-kube-api-access-smghv\") pod \"nova-cell1-6745-account-create-kchwn\" (UID: \"9e65b9ca-75fc-422b-bd39-55be57dd727f\") " pod="openstack/nova-cell1-6745-account-create-kchwn" Oct 14 13:33:55 crc kubenswrapper[4725]: I1014 13:33:55.316580 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7fb6-account-create-xdc48" Oct 14 13:33:55 crc kubenswrapper[4725]: I1014 13:33:55.365133 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smghv\" (UniqueName: \"kubernetes.io/projected/9e65b9ca-75fc-422b-bd39-55be57dd727f-kube-api-access-smghv\") pod \"nova-cell1-6745-account-create-kchwn\" (UID: \"9e65b9ca-75fc-422b-bd39-55be57dd727f\") " pod="openstack/nova-cell1-6745-account-create-kchwn" Oct 14 13:33:55 crc kubenswrapper[4725]: I1014 13:33:55.391151 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smghv\" (UniqueName: \"kubernetes.io/projected/9e65b9ca-75fc-422b-bd39-55be57dd727f-kube-api-access-smghv\") pod \"nova-cell1-6745-account-create-kchwn\" (UID: \"9e65b9ca-75fc-422b-bd39-55be57dd727f\") " pod="openstack/nova-cell1-6745-account-create-kchwn" Oct 14 13:33:55 crc kubenswrapper[4725]: I1014 13:33:55.543102 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-6745-account-create-kchwn" Oct 14 13:33:55 crc kubenswrapper[4725]: I1014 13:33:55.824268 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7fb6-account-create-xdc48"] Oct 14 13:33:55 crc kubenswrapper[4725]: W1014 13:33:55.997056 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e65b9ca_75fc_422b_bd39_55be57dd727f.slice/crio-629e90c7c86c9eefcd98771cb3f68e2e3734a0573dd9a28762d6015380a1c855 WatchSource:0}: Error finding container 629e90c7c86c9eefcd98771cb3f68e2e3734a0573dd9a28762d6015380a1c855: Status 404 returned error can't find the container with id 629e90c7c86c9eefcd98771cb3f68e2e3734a0573dd9a28762d6015380a1c855 Oct 14 13:33:56 crc kubenswrapper[4725]: I1014 13:33:56.000264 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-6745-account-create-kchwn"] Oct 14 13:33:56 crc kubenswrapper[4725]: I1014 13:33:56.541808 4725 generic.go:334] "Generic (PLEG): container finished" podID="f1eaa22a-4372-40ab-be9f-6e1c845407dc" containerID="65bc1a55c871dcd6600774fecec4a96d1809e8d6e18f399f9283898b8bea6736" exitCode=0 Oct 14 13:33:56 crc kubenswrapper[4725]: I1014 13:33:56.541977 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7fb6-account-create-xdc48" event={"ID":"f1eaa22a-4372-40ab-be9f-6e1c845407dc","Type":"ContainerDied","Data":"65bc1a55c871dcd6600774fecec4a96d1809e8d6e18f399f9283898b8bea6736"} Oct 14 13:33:56 crc kubenswrapper[4725]: I1014 13:33:56.542264 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7fb6-account-create-xdc48" event={"ID":"f1eaa22a-4372-40ab-be9f-6e1c845407dc","Type":"ContainerStarted","Data":"aca44b7c06ad6bba8450110e8da7c3c1ab653e038e0ca5e7c1ff825c6c0070b7"} Oct 14 13:33:56 crc kubenswrapper[4725]: I1014 13:33:56.544190 4725 generic.go:334] "Generic (PLEG): container finished" podID="9e65b9ca-75fc-422b-bd39-55be57dd727f" containerID="235ad7f53874c346b76f4fe0897a938719c662e7e12e0180127752edd4d93294" exitCode=0 Oct 14 13:33:56 crc kubenswrapper[4725]: I1014 13:33:56.544329 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6745-account-create-kchwn" event={"ID":"9e65b9ca-75fc-422b-bd39-55be57dd727f","Type":"ContainerDied","Data":"235ad7f53874c346b76f4fe0897a938719c662e7e12e0180127752edd4d93294"} Oct 14 13:33:56 crc kubenswrapper[4725]: I1014 13:33:56.544428 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6745-account-create-kchwn" event={"ID":"9e65b9ca-75fc-422b-bd39-55be57dd727f","Type":"ContainerStarted","Data":"629e90c7c86c9eefcd98771cb3f68e2e3734a0573dd9a28762d6015380a1c855"} Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.199433 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.341827 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nb7fc\" (UniqueName: \"kubernetes.io/projected/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-kube-api-access-nb7fc\") pod \"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b\" (UID: \"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b\") " Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.342186 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-scripts\") pod \"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b\" (UID: \"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b\") " Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.342289 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-config-data\") pod \"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b\" (UID: \"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b\") " Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.342383 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-log-httpd\") pod \"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b\" (UID: \"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b\") " Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.342484 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-run-httpd\") pod \"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b\" (UID: \"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b\") " Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.342565 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-ceilometer-tls-certs\") pod \"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b\" (UID: \"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b\") " Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.342755 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-sg-core-conf-yaml\") pod \"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b\" (UID: \"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b\") " Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.342876 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-combined-ca-bundle\") pod \"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b\" (UID: \"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b\") " Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.342866 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b" (UID: "f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.342920 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b" (UID: "f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.351064 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-kube-api-access-nb7fc" (OuterVolumeSpecName: "kube-api-access-nb7fc") pod "f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b" (UID: "f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b"). InnerVolumeSpecName "kube-api-access-nb7fc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.351875 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-scripts" (OuterVolumeSpecName: "scripts") pod "f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b" (UID: "f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.403615 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b" (UID: "f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.407830 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b" (UID: "f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.437311 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b" (UID: "f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.454319 4725 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.454367 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.454384 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nb7fc\" (UniqueName: \"kubernetes.io/projected/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-kube-api-access-nb7fc\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.454407 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.454420 4725 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.454433 4725 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.454481 4725 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.473585 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-config-data" (OuterVolumeSpecName: "config-data") pod "f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b" (UID: "f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.553402 4725 generic.go:334] "Generic (PLEG): container finished" podID="f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b" containerID="e2f42dbf5703b54c567dd0ce0e375b082d91cbabc595269e2a8d11211c71a158" exitCode=0 Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.553493 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b","Type":"ContainerDied","Data":"e2f42dbf5703b54c567dd0ce0e375b082d91cbabc595269e2a8d11211c71a158"} Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.553518 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.553554 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b","Type":"ContainerDied","Data":"6d77dbb56b5857601d5150bfc464c2606ffbed8bf5af5a233661dc4a7a8aae53"} Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.553573 4725 scope.go:117] "RemoveContainer" containerID="0ae1e7a4651221ab1b8ba78cb6154510762e7804f8f7848d64f6b866f0c66343" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.557751 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.578053 4725 scope.go:117] "RemoveContainer" containerID="7b73e107079d4f730c4f854f440fb77a8b4b0905a211514d08f5cb876b982ae5" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.594917 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.610798 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.620485 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 14 13:33:57 crc kubenswrapper[4725]: E1014 13:33:57.621007 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b" containerName="proxy-httpd" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.621034 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b" containerName="proxy-httpd" Oct 14 13:33:57 crc kubenswrapper[4725]: E1014 13:33:57.621075 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b" containerName="sg-core" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.621085 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b" containerName="sg-core" Oct 14 13:33:57 crc kubenswrapper[4725]: E1014 13:33:57.621106 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b" containerName="ceilometer-central-agent" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.621115 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b" containerName="ceilometer-central-agent" Oct 14 13:33:57 crc kubenswrapper[4725]: E1014 13:33:57.621125 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b" containerName="ceilometer-notification-agent" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.621133 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b" containerName="ceilometer-notification-agent" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.621333 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b" containerName="ceilometer-central-agent" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.621370 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b" containerName="proxy-httpd" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.621384 4725 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b" containerName="sg-core" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.621404 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b" containerName="ceilometer-notification-agent" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.625359 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.626394 4725 scope.go:117] "RemoveContainer" containerID="3ee6e64a5fdbbe17c3797c534357d4064d7ddee8062225bdd5bfed0dca26ddc1" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.631178 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.632062 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.632250 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.632378 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.672214 4725 scope.go:117] "RemoveContainer" containerID="e2f42dbf5703b54c567dd0ce0e375b082d91cbabc595269e2a8d11211c71a158" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.713819 4725 scope.go:117] "RemoveContainer" containerID="0ae1e7a4651221ab1b8ba78cb6154510762e7804f8f7848d64f6b866f0c66343" Oct 14 13:33:57 crc kubenswrapper[4725]: E1014 13:33:57.714823 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ae1e7a4651221ab1b8ba78cb6154510762e7804f8f7848d64f6b866f0c66343\": container with ID starting with 0ae1e7a4651221ab1b8ba78cb6154510762e7804f8f7848d64f6b866f0c66343 not found: ID does not exist" containerID="0ae1e7a4651221ab1b8ba78cb6154510762e7804f8f7848d64f6b866f0c66343" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.714874 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ae1e7a4651221ab1b8ba78cb6154510762e7804f8f7848d64f6b866f0c66343"} err="failed to get container status \"0ae1e7a4651221ab1b8ba78cb6154510762e7804f8f7848d64f6b866f0c66343\": rpc error: code = NotFound desc = could not find container \"0ae1e7a4651221ab1b8ba78cb6154510762e7804f8f7848d64f6b866f0c66343\": container with ID starting with 0ae1e7a4651221ab1b8ba78cb6154510762e7804f8f7848d64f6b866f0c66343 not found: ID does not exist" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.714904 4725 scope.go:117] "RemoveContainer" containerID="7b73e107079d4f730c4f854f440fb77a8b4b0905a211514d08f5cb876b982ae5" Oct 14 13:33:57 crc kubenswrapper[4725]: E1014 13:33:57.717248 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b73e107079d4f730c4f854f440fb77a8b4b0905a211514d08f5cb876b982ae5\": container with ID starting with 7b73e107079d4f730c4f854f440fb77a8b4b0905a211514d08f5cb876b982ae5 not found: ID does not exist" containerID="7b73e107079d4f730c4f854f440fb77a8b4b0905a211514d08f5cb876b982ae5" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.717296 4725 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7b73e107079d4f730c4f854f440fb77a8b4b0905a211514d08f5cb876b982ae5"} err="failed to get container status \"7b73e107079d4f730c4f854f440fb77a8b4b0905a211514d08f5cb876b982ae5\": rpc error: code = NotFound desc = could not find container \"7b73e107079d4f730c4f854f440fb77a8b4b0905a211514d08f5cb876b982ae5\": container with ID starting with 7b73e107079d4f730c4f854f440fb77a8b4b0905a211514d08f5cb876b982ae5 not found: ID does not exist" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.717322 4725 scope.go:117] "RemoveContainer" containerID="3ee6e64a5fdbbe17c3797c534357d4064d7ddee8062225bdd5bfed0dca26ddc1" Oct 14 13:33:57 crc kubenswrapper[4725]: E1014 13:33:57.718219 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ee6e64a5fdbbe17c3797c534357d4064d7ddee8062225bdd5bfed0dca26ddc1\": container with ID starting with 3ee6e64a5fdbbe17c3797c534357d4064d7ddee8062225bdd5bfed0dca26ddc1 not found: ID does not exist" containerID="3ee6e64a5fdbbe17c3797c534357d4064d7ddee8062225bdd5bfed0dca26ddc1" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.718247 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ee6e64a5fdbbe17c3797c534357d4064d7ddee8062225bdd5bfed0dca26ddc1"} err="failed to get container status \"3ee6e64a5fdbbe17c3797c534357d4064d7ddee8062225bdd5bfed0dca26ddc1\": rpc error: code = NotFound desc = could not find container \"3ee6e64a5fdbbe17c3797c534357d4064d7ddee8062225bdd5bfed0dca26ddc1\": container with ID starting with 3ee6e64a5fdbbe17c3797c534357d4064d7ddee8062225bdd5bfed0dca26ddc1 not found: ID does not exist" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.718260 4725 scope.go:117] "RemoveContainer" containerID="e2f42dbf5703b54c567dd0ce0e375b082d91cbabc595269e2a8d11211c71a158" Oct 14 13:33:57 crc kubenswrapper[4725]: E1014 13:33:57.718934 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2f42dbf5703b54c567dd0ce0e375b082d91cbabc595269e2a8d11211c71a158\": container with ID starting with e2f42dbf5703b54c567dd0ce0e375b082d91cbabc595269e2a8d11211c71a158 not found: ID does not exist" containerID="e2f42dbf5703b54c567dd0ce0e375b082d91cbabc595269e2a8d11211c71a158" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.718967 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2f42dbf5703b54c567dd0ce0e375b082d91cbabc595269e2a8d11211c71a158"} err="failed to get container status \"e2f42dbf5703b54c567dd0ce0e375b082d91cbabc595269e2a8d11211c71a158\": rpc error: code = NotFound desc = could not find container \"e2f42dbf5703b54c567dd0ce0e375b082d91cbabc595269e2a8d11211c71a158\": container with ID starting with e2f42dbf5703b54c567dd0ce0e375b082d91cbabc595269e2a8d11211c71a158 not found: ID does not exist" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.762499 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32dec6da-c9a2-490c-8b50-14e95259066b-scripts\") pod \"ceilometer-0\" (UID: \"32dec6da-c9a2-490c-8b50-14e95259066b\") " pod="openstack/ceilometer-0" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.762547 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/32dec6da-c9a2-490c-8b50-14e95259066b-config-data\") pod \"ceilometer-0\" (UID: \"32dec6da-c9a2-490c-8b50-14e95259066b\") " pod="openstack/ceilometer-0" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.762680 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/32dec6da-c9a2-490c-8b50-14e95259066b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"32dec6da-c9a2-490c-8b50-14e95259066b\") " pod="openstack/ceilometer-0" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.762718 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32dec6da-c9a2-490c-8b50-14e95259066b-log-httpd\") pod \"ceilometer-0\" (UID: \"32dec6da-c9a2-490c-8b50-14e95259066b\") " pod="openstack/ceilometer-0" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.765280 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32dec6da-c9a2-490c-8b50-14e95259066b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"32dec6da-c9a2-490c-8b50-14e95259066b\") " pod="openstack/ceilometer-0" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.766350 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/32dec6da-c9a2-490c-8b50-14e95259066b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"32dec6da-c9a2-490c-8b50-14e95259066b\") " pod="openstack/ceilometer-0" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.766396 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k74vv\" (UniqueName: \"kubernetes.io/projected/32dec6da-c9a2-490c-8b50-14e95259066b-kube-api-access-k74vv\") pod \"ceilometer-0\" (UID: \"32dec6da-c9a2-490c-8b50-14e95259066b\") " pod="openstack/ceilometer-0" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.766573 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32dec6da-c9a2-490c-8b50-14e95259066b-run-httpd\") pod \"ceilometer-0\" (UID: \"32dec6da-c9a2-490c-8b50-14e95259066b\") " pod="openstack/ceilometer-0" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.868608 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32dec6da-c9a2-490c-8b50-14e95259066b-log-httpd\") pod \"ceilometer-0\" (UID: \"32dec6da-c9a2-490c-8b50-14e95259066b\") " pod="openstack/ceilometer-0" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.868796 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32dec6da-c9a2-490c-8b50-14e95259066b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"32dec6da-c9a2-490c-8b50-14e95259066b\") " pod="openstack/ceilometer-0" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.868933 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/32dec6da-c9a2-490c-8b50-14e95259066b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"32dec6da-c9a2-490c-8b50-14e95259066b\") " pod="openstack/ceilometer-0" Oct 14 13:33:57 crc 
kubenswrapper[4725]: I1014 13:33:57.868960 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k74vv\" (UniqueName: \"kubernetes.io/projected/32dec6da-c9a2-490c-8b50-14e95259066b-kube-api-access-k74vv\") pod \"ceilometer-0\" (UID: \"32dec6da-c9a2-490c-8b50-14e95259066b\") " pod="openstack/ceilometer-0" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.869011 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32dec6da-c9a2-490c-8b50-14e95259066b-run-httpd\") pod \"ceilometer-0\" (UID: \"32dec6da-c9a2-490c-8b50-14e95259066b\") " pod="openstack/ceilometer-0" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.869061 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32dec6da-c9a2-490c-8b50-14e95259066b-scripts\") pod \"ceilometer-0\" (UID: \"32dec6da-c9a2-490c-8b50-14e95259066b\") " pod="openstack/ceilometer-0" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.869086 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32dec6da-c9a2-490c-8b50-14e95259066b-config-data\") pod \"ceilometer-0\" (UID: \"32dec6da-c9a2-490c-8b50-14e95259066b\") " pod="openstack/ceilometer-0" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.869143 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/32dec6da-c9a2-490c-8b50-14e95259066b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"32dec6da-c9a2-490c-8b50-14e95259066b\") " pod="openstack/ceilometer-0" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.869487 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32dec6da-c9a2-490c-8b50-14e95259066b-log-httpd\") pod \"ceilometer-0\" (UID: \"32dec6da-c9a2-490c-8b50-14e95259066b\") " pod="openstack/ceilometer-0" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.873836 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32dec6da-c9a2-490c-8b50-14e95259066b-run-httpd\") pod \"ceilometer-0\" (UID: \"32dec6da-c9a2-490c-8b50-14e95259066b\") " pod="openstack/ceilometer-0" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.876136 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32dec6da-c9a2-490c-8b50-14e95259066b-scripts\") pod \"ceilometer-0\" (UID: \"32dec6da-c9a2-490c-8b50-14e95259066b\") " pod="openstack/ceilometer-0" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.878683 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32dec6da-c9a2-490c-8b50-14e95259066b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"32dec6da-c9a2-490c-8b50-14e95259066b\") " pod="openstack/ceilometer-0" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.884204 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/32dec6da-c9a2-490c-8b50-14e95259066b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"32dec6da-c9a2-490c-8b50-14e95259066b\") " pod="openstack/ceilometer-0" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.885120 4725 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/32dec6da-c9a2-490c-8b50-14e95259066b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"32dec6da-c9a2-490c-8b50-14e95259066b\") " pod="openstack/ceilometer-0" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.888488 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k74vv\" (UniqueName: \"kubernetes.io/projected/32dec6da-c9a2-490c-8b50-14e95259066b-kube-api-access-k74vv\") pod \"ceilometer-0\" (UID: \"32dec6da-c9a2-490c-8b50-14e95259066b\") " pod="openstack/ceilometer-0" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.889118 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32dec6da-c9a2-490c-8b50-14e95259066b-config-data\") pod \"ceilometer-0\" (UID: \"32dec6da-c9a2-490c-8b50-14e95259066b\") " pod="openstack/ceilometer-0" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.933145 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b" path="/var/lib/kubelet/pods/f3d3b9d9-7ab7-4e6b-8bd4-0f6be3a80a5b/volumes" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.934822 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-6745-account-create-kchwn" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.949541 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7fb6-account-create-xdc48" Oct 14 13:33:57 crc kubenswrapper[4725]: I1014 13:33:57.981504 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 13:33:58 crc kubenswrapper[4725]: I1014 13:33:58.075753 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvkml\" (UniqueName: \"kubernetes.io/projected/f1eaa22a-4372-40ab-be9f-6e1c845407dc-kube-api-access-fvkml\") pod \"f1eaa22a-4372-40ab-be9f-6e1c845407dc\" (UID: \"f1eaa22a-4372-40ab-be9f-6e1c845407dc\") " Oct 14 13:33:58 crc kubenswrapper[4725]: I1014 13:33:58.075907 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smghv\" (UniqueName: \"kubernetes.io/projected/9e65b9ca-75fc-422b-bd39-55be57dd727f-kube-api-access-smghv\") pod \"9e65b9ca-75fc-422b-bd39-55be57dd727f\" (UID: \"9e65b9ca-75fc-422b-bd39-55be57dd727f\") " Oct 14 13:33:58 crc kubenswrapper[4725]: I1014 13:33:58.080166 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1eaa22a-4372-40ab-be9f-6e1c845407dc-kube-api-access-fvkml" (OuterVolumeSpecName: "kube-api-access-fvkml") pod "f1eaa22a-4372-40ab-be9f-6e1c845407dc" (UID: "f1eaa22a-4372-40ab-be9f-6e1c845407dc"). InnerVolumeSpecName "kube-api-access-fvkml". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:33:58 crc kubenswrapper[4725]: I1014 13:33:58.080840 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e65b9ca-75fc-422b-bd39-55be57dd727f-kube-api-access-smghv" (OuterVolumeSpecName: "kube-api-access-smghv") pod "9e65b9ca-75fc-422b-bd39-55be57dd727f" (UID: "9e65b9ca-75fc-422b-bd39-55be57dd727f"). InnerVolumeSpecName "kube-api-access-smghv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:33:58 crc kubenswrapper[4725]: I1014 13:33:58.178510 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvkml\" (UniqueName: \"kubernetes.io/projected/f1eaa22a-4372-40ab-be9f-6e1c845407dc-kube-api-access-fvkml\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:58 crc kubenswrapper[4725]: I1014 13:33:58.178534 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smghv\" (UniqueName: \"kubernetes.io/projected/9e65b9ca-75fc-422b-bd39-55be57dd727f-kube-api-access-smghv\") on node \"crc\" DevicePath \"\"" Oct 14 13:33:58 crc kubenswrapper[4725]: I1014 13:33:58.402405 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 13:33:58 crc kubenswrapper[4725]: I1014 13:33:58.564724 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32dec6da-c9a2-490c-8b50-14e95259066b","Type":"ContainerStarted","Data":"8529a8c4eb1961c7c4b1825316308676edc31cc003c00a1d4a7c28d3bd66ffd3"} Oct 14 13:33:58 crc kubenswrapper[4725]: I1014 13:33:58.566548 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7fb6-account-create-xdc48" event={"ID":"f1eaa22a-4372-40ab-be9f-6e1c845407dc","Type":"ContainerDied","Data":"aca44b7c06ad6bba8450110e8da7c3c1ab653e038e0ca5e7c1ff825c6c0070b7"} Oct 14 13:33:58 crc kubenswrapper[4725]: I1014 13:33:58.566572 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aca44b7c06ad6bba8450110e8da7c3c1ab653e038e0ca5e7c1ff825c6c0070b7" Oct 14 13:33:58 crc kubenswrapper[4725]: I1014 13:33:58.566644 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7fb6-account-create-xdc48" Oct 14 13:33:58 crc kubenswrapper[4725]: I1014 13:33:58.568049 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-6745-account-create-kchwn" event={"ID":"9e65b9ca-75fc-422b-bd39-55be57dd727f","Type":"ContainerDied","Data":"629e90c7c86c9eefcd98771cb3f68e2e3734a0573dd9a28762d6015380a1c855"} Oct 14 13:33:58 crc kubenswrapper[4725]: I1014 13:33:58.568085 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="629e90c7c86c9eefcd98771cb3f68e2e3734a0573dd9a28762d6015380a1c855" Oct 14 13:33:58 crc kubenswrapper[4725]: I1014 13:33:58.568259 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-6745-account-create-kchwn" Oct 14 13:33:59 crc kubenswrapper[4725]: I1014 13:33:59.579184 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32dec6da-c9a2-490c-8b50-14e95259066b","Type":"ContainerStarted","Data":"919957d22e96b2058366f667070783ef3829c109c09e419921da3d105ead154f"} Oct 14 13:34:00 crc kubenswrapper[4725]: I1014 13:34:00.213796 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-mdw9d"] Oct 14 13:34:00 crc kubenswrapper[4725]: E1014 13:34:00.214604 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1eaa22a-4372-40ab-be9f-6e1c845407dc" containerName="mariadb-account-create" Oct 14 13:34:00 crc kubenswrapper[4725]: I1014 13:34:00.214627 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1eaa22a-4372-40ab-be9f-6e1c845407dc" containerName="mariadb-account-create" Oct 14 13:34:00 crc kubenswrapper[4725]: E1014 13:34:00.214658 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e65b9ca-75fc-422b-bd39-55be57dd727f" containerName="mariadb-account-create" Oct 14 13:34:00 crc kubenswrapper[4725]: I1014 13:34:00.214667 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e65b9ca-75fc-422b-bd39-55be57dd727f" containerName="mariadb-account-create" Oct 14 13:34:00 crc kubenswrapper[4725]: I1014 13:34:00.214856 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1eaa22a-4372-40ab-be9f-6e1c845407dc" containerName="mariadb-account-create" Oct 14 13:34:00 crc kubenswrapper[4725]: I1014 13:34:00.214895 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e65b9ca-75fc-422b-bd39-55be57dd727f" containerName="mariadb-account-create" Oct 14 13:34:00 crc kubenswrapper[4725]: I1014 13:34:00.215666 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-mdw9d" Oct 14 13:34:00 crc kubenswrapper[4725]: I1014 13:34:00.220056 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-nzm7b" Oct 14 13:34:00 crc kubenswrapper[4725]: I1014 13:34:00.220339 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Oct 14 13:34:00 crc kubenswrapper[4725]: I1014 13:34:00.228755 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Oct 14 13:34:00 crc kubenswrapper[4725]: I1014 13:34:00.234611 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-mdw9d"] Oct 14 13:34:00 crc kubenswrapper[4725]: I1014 13:34:00.319647 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71b4602e-fc28-44d1-986e-dad1133397c1-scripts\") pod \"nova-cell0-conductor-db-sync-mdw9d\" (UID: \"71b4602e-fc28-44d1-986e-dad1133397c1\") " pod="openstack/nova-cell0-conductor-db-sync-mdw9d" Oct 14 13:34:00 crc kubenswrapper[4725]: I1014 13:34:00.319758 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71b4602e-fc28-44d1-986e-dad1133397c1-config-data\") pod \"nova-cell0-conductor-db-sync-mdw9d\" (UID: \"71b4602e-fc28-44d1-986e-dad1133397c1\") " pod="openstack/nova-cell0-conductor-db-sync-mdw9d" Oct 14 13:34:00 crc kubenswrapper[4725]: I1014 13:34:00.319785 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b4602e-fc28-44d1-986e-dad1133397c1-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-mdw9d\" (UID: \"71b4602e-fc28-44d1-986e-dad1133397c1\") " pod="openstack/nova-cell0-conductor-db-sync-mdw9d" Oct 14 13:34:00 crc kubenswrapper[4725]: I1014 13:34:00.319807 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz56c\" (UniqueName: \"kubernetes.io/projected/71b4602e-fc28-44d1-986e-dad1133397c1-kube-api-access-wz56c\") pod \"nova-cell0-conductor-db-sync-mdw9d\" (UID: \"71b4602e-fc28-44d1-986e-dad1133397c1\") " pod="openstack/nova-cell0-conductor-db-sync-mdw9d" Oct 14 13:34:00 crc kubenswrapper[4725]: I1014 13:34:00.422538 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71b4602e-fc28-44d1-986e-dad1133397c1-scripts\") pod \"nova-cell0-conductor-db-sync-mdw9d\" (UID: \"71b4602e-fc28-44d1-986e-dad1133397c1\") " pod="openstack/nova-cell0-conductor-db-sync-mdw9d" Oct 14 13:34:00 crc kubenswrapper[4725]: I1014 13:34:00.422728 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71b4602e-fc28-44d1-986e-dad1133397c1-config-data\") pod \"nova-cell0-conductor-db-sync-mdw9d\" (UID: \"71b4602e-fc28-44d1-986e-dad1133397c1\") " pod="openstack/nova-cell0-conductor-db-sync-mdw9d" Oct 14 13:34:00 crc kubenswrapper[4725]: I1014 13:34:00.422779 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b4602e-fc28-44d1-986e-dad1133397c1-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-mdw9d\" (UID: 
\"71b4602e-fc28-44d1-986e-dad1133397c1\") " pod="openstack/nova-cell0-conductor-db-sync-mdw9d" Oct 14 13:34:00 crc kubenswrapper[4725]: I1014 13:34:00.422824 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wz56c\" (UniqueName: \"kubernetes.io/projected/71b4602e-fc28-44d1-986e-dad1133397c1-kube-api-access-wz56c\") pod \"nova-cell0-conductor-db-sync-mdw9d\" (UID: \"71b4602e-fc28-44d1-986e-dad1133397c1\") " pod="openstack/nova-cell0-conductor-db-sync-mdw9d" Oct 14 13:34:00 crc kubenswrapper[4725]: I1014 13:34:00.428171 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71b4602e-fc28-44d1-986e-dad1133397c1-scripts\") pod \"nova-cell0-conductor-db-sync-mdw9d\" (UID: \"71b4602e-fc28-44d1-986e-dad1133397c1\") " pod="openstack/nova-cell0-conductor-db-sync-mdw9d" Oct 14 13:34:00 crc kubenswrapper[4725]: I1014 13:34:00.430165 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b4602e-fc28-44d1-986e-dad1133397c1-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-mdw9d\" (UID: \"71b4602e-fc28-44d1-986e-dad1133397c1\") " pod="openstack/nova-cell0-conductor-db-sync-mdw9d" Oct 14 13:34:00 crc kubenswrapper[4725]: I1014 13:34:00.439203 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71b4602e-fc28-44d1-986e-dad1133397c1-config-data\") pod \"nova-cell0-conductor-db-sync-mdw9d\" (UID: \"71b4602e-fc28-44d1-986e-dad1133397c1\") " pod="openstack/nova-cell0-conductor-db-sync-mdw9d" Oct 14 13:34:00 crc kubenswrapper[4725]: I1014 13:34:00.446153 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wz56c\" (UniqueName: \"kubernetes.io/projected/71b4602e-fc28-44d1-986e-dad1133397c1-kube-api-access-wz56c\") pod \"nova-cell0-conductor-db-sync-mdw9d\" (UID: \"71b4602e-fc28-44d1-986e-dad1133397c1\") " pod="openstack/nova-cell0-conductor-db-sync-mdw9d" Oct 14 13:34:00 crc kubenswrapper[4725]: I1014 13:34:00.573193 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-mdw9d" Oct 14 13:34:00 crc kubenswrapper[4725]: I1014 13:34:00.624825 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32dec6da-c9a2-490c-8b50-14e95259066b","Type":"ContainerStarted","Data":"05ff8112b4f19531b2b9e2569e78cef60207d08e5ea7372bec84911b6d14bbab"} Oct 14 13:34:01 crc kubenswrapper[4725]: I1014 13:34:01.121204 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-mdw9d"] Oct 14 13:34:01 crc kubenswrapper[4725]: I1014 13:34:01.632941 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-mdw9d" event={"ID":"71b4602e-fc28-44d1-986e-dad1133397c1","Type":"ContainerStarted","Data":"619e19e57165f36ed35a1fb8a6e6acc394ab992e42235a36565b3a6ddf0dc3b4"} Oct 14 13:34:01 crc kubenswrapper[4725]: I1014 13:34:01.635922 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32dec6da-c9a2-490c-8b50-14e95259066b","Type":"ContainerStarted","Data":"1ea02786c15b34ed405c1b92534d523e279490cd00c7199e4b8c7a9eaa2498d1"} Oct 14 13:34:02 crc kubenswrapper[4725]: I1014 13:34:02.520884 4725 patch_prober.go:28] interesting pod/machine-config-daemon-t9hh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 13:34:02 crc kubenswrapper[4725]: I1014 13:34:02.521199 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 13:34:02 crc kubenswrapper[4725]: I1014 13:34:02.646959 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32dec6da-c9a2-490c-8b50-14e95259066b","Type":"ContainerStarted","Data":"c86368d69a2defa52913b87b2284e72895e58adcf4c9a3c7481745237671412c"} Oct 14 13:34:02 crc kubenswrapper[4725]: I1014 13:34:02.647533 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 14 13:34:02 crc kubenswrapper[4725]: I1014 13:34:02.669892 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.026457927 podStartE2EDuration="5.669874546s" podCreationTimestamp="2025-10-14 13:33:57 +0000 UTC" firstStartedPulling="2025-10-14 13:33:58.411798288 +0000 UTC m=+1155.260233097" lastFinishedPulling="2025-10-14 13:34:02.055214907 +0000 UTC m=+1158.903649716" observedRunningTime="2025-10-14 13:34:02.663084708 +0000 UTC m=+1159.511519527" watchObservedRunningTime="2025-10-14 13:34:02.669874546 +0000 UTC m=+1159.518309345" Oct 14 13:34:08 crc kubenswrapper[4725]: I1014 13:34:08.696396 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-mdw9d" event={"ID":"71b4602e-fc28-44d1-986e-dad1133397c1","Type":"ContainerStarted","Data":"7e401f768f83790b17644e75cdcd6d9420658ebfe8d9aa964ec1970420f3a4a5"} Oct 14 13:34:08 crc kubenswrapper[4725]: I1014 13:34:08.715915 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-mdw9d" podStartSLOduration=1.914562369 
podStartE2EDuration="8.715895551s" podCreationTimestamp="2025-10-14 13:34:00 +0000 UTC" firstStartedPulling="2025-10-14 13:34:01.128729066 +0000 UTC m=+1157.977163875" lastFinishedPulling="2025-10-14 13:34:07.930062238 +0000 UTC m=+1164.778497057" observedRunningTime="2025-10-14 13:34:08.714481241 +0000 UTC m=+1165.562916050" watchObservedRunningTime="2025-10-14 13:34:08.715895551 +0000 UTC m=+1165.564330360" Oct 14 13:34:17 crc kubenswrapper[4725]: I1014 13:34:17.770723 4725 generic.go:334] "Generic (PLEG): container finished" podID="71b4602e-fc28-44d1-986e-dad1133397c1" containerID="7e401f768f83790b17644e75cdcd6d9420658ebfe8d9aa964ec1970420f3a4a5" exitCode=0 Oct 14 13:34:17 crc kubenswrapper[4725]: I1014 13:34:17.770829 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-mdw9d" event={"ID":"71b4602e-fc28-44d1-986e-dad1133397c1","Type":"ContainerDied","Data":"7e401f768f83790b17644e75cdcd6d9420658ebfe8d9aa964ec1970420f3a4a5"} Oct 14 13:34:19 crc kubenswrapper[4725]: I1014 13:34:19.287883 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-mdw9d" Oct 14 13:34:19 crc kubenswrapper[4725]: I1014 13:34:19.361432 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b4602e-fc28-44d1-986e-dad1133397c1-combined-ca-bundle\") pod \"71b4602e-fc28-44d1-986e-dad1133397c1\" (UID: \"71b4602e-fc28-44d1-986e-dad1133397c1\") " Oct 14 13:34:19 crc kubenswrapper[4725]: I1014 13:34:19.361601 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wz56c\" (UniqueName: \"kubernetes.io/projected/71b4602e-fc28-44d1-986e-dad1133397c1-kube-api-access-wz56c\") pod \"71b4602e-fc28-44d1-986e-dad1133397c1\" (UID: \"71b4602e-fc28-44d1-986e-dad1133397c1\") " Oct 14 13:34:19 crc kubenswrapper[4725]: I1014 13:34:19.361828 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71b4602e-fc28-44d1-986e-dad1133397c1-config-data\") pod \"71b4602e-fc28-44d1-986e-dad1133397c1\" (UID: \"71b4602e-fc28-44d1-986e-dad1133397c1\") " Oct 14 13:34:19 crc kubenswrapper[4725]: I1014 13:34:19.362113 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71b4602e-fc28-44d1-986e-dad1133397c1-scripts\") pod \"71b4602e-fc28-44d1-986e-dad1133397c1\" (UID: \"71b4602e-fc28-44d1-986e-dad1133397c1\") " Oct 14 13:34:19 crc kubenswrapper[4725]: I1014 13:34:19.366219 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71b4602e-fc28-44d1-986e-dad1133397c1-kube-api-access-wz56c" (OuterVolumeSpecName: "kube-api-access-wz56c") pod "71b4602e-fc28-44d1-986e-dad1133397c1" (UID: "71b4602e-fc28-44d1-986e-dad1133397c1"). InnerVolumeSpecName "kube-api-access-wz56c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:34:19 crc kubenswrapper[4725]: I1014 13:34:19.368163 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71b4602e-fc28-44d1-986e-dad1133397c1-scripts" (OuterVolumeSpecName: "scripts") pod "71b4602e-fc28-44d1-986e-dad1133397c1" (UID: "71b4602e-fc28-44d1-986e-dad1133397c1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:34:19 crc kubenswrapper[4725]: I1014 13:34:19.388234 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71b4602e-fc28-44d1-986e-dad1133397c1-config-data" (OuterVolumeSpecName: "config-data") pod "71b4602e-fc28-44d1-986e-dad1133397c1" (UID: "71b4602e-fc28-44d1-986e-dad1133397c1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:34:19 crc kubenswrapper[4725]: I1014 13:34:19.388825 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71b4602e-fc28-44d1-986e-dad1133397c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "71b4602e-fc28-44d1-986e-dad1133397c1" (UID: "71b4602e-fc28-44d1-986e-dad1133397c1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:34:19 crc kubenswrapper[4725]: I1014 13:34:19.465191 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71b4602e-fc28-44d1-986e-dad1133397c1-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 13:34:19 crc kubenswrapper[4725]: I1014 13:34:19.465242 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71b4602e-fc28-44d1-986e-dad1133397c1-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 13:34:19 crc kubenswrapper[4725]: I1014 13:34:19.465261 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b4602e-fc28-44d1-986e-dad1133397c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:34:19 crc kubenswrapper[4725]: I1014 13:34:19.465280 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wz56c\" (UniqueName: \"kubernetes.io/projected/71b4602e-fc28-44d1-986e-dad1133397c1-kube-api-access-wz56c\") on node \"crc\" DevicePath \"\"" Oct 14 13:34:19 crc kubenswrapper[4725]: I1014 13:34:19.804082 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-mdw9d" event={"ID":"71b4602e-fc28-44d1-986e-dad1133397c1","Type":"ContainerDied","Data":"619e19e57165f36ed35a1fb8a6e6acc394ab992e42235a36565b3a6ddf0dc3b4"} Oct 14 13:34:19 crc kubenswrapper[4725]: I1014 13:34:19.804174 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="619e19e57165f36ed35a1fb8a6e6acc394ab992e42235a36565b3a6ddf0dc3b4" Oct 14 13:34:19 crc kubenswrapper[4725]: I1014 13:34:19.804241 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-mdw9d" Oct 14 13:34:19 crc kubenswrapper[4725]: I1014 13:34:19.943281 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 14 13:34:19 crc kubenswrapper[4725]: E1014 13:34:19.943750 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71b4602e-fc28-44d1-986e-dad1133397c1" containerName="nova-cell0-conductor-db-sync" Oct 14 13:34:19 crc kubenswrapper[4725]: I1014 13:34:19.943769 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="71b4602e-fc28-44d1-986e-dad1133397c1" containerName="nova-cell0-conductor-db-sync" Oct 14 13:34:19 crc kubenswrapper[4725]: I1014 13:34:19.943944 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="71b4602e-fc28-44d1-986e-dad1133397c1" containerName="nova-cell0-conductor-db-sync" Oct 14 13:34:19 crc kubenswrapper[4725]: I1014 13:34:19.944604 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 14 13:34:19 crc kubenswrapper[4725]: I1014 13:34:19.947829 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Oct 14 13:34:19 crc kubenswrapper[4725]: I1014 13:34:19.948346 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-nzm7b" Oct 14 13:34:19 crc kubenswrapper[4725]: I1014 13:34:19.960981 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 14 13:34:20 crc kubenswrapper[4725]: I1014 13:34:20.074711 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ecc0eca-3a69-4d6c-be9c-a6af5c0fceff-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"4ecc0eca-3a69-4d6c-be9c-a6af5c0fceff\") " pod="openstack/nova-cell0-conductor-0" Oct 14 13:34:20 crc kubenswrapper[4725]: I1014 13:34:20.074768 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ecc0eca-3a69-4d6c-be9c-a6af5c0fceff-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"4ecc0eca-3a69-4d6c-be9c-a6af5c0fceff\") " pod="openstack/nova-cell0-conductor-0" Oct 14 13:34:20 crc kubenswrapper[4725]: I1014 13:34:20.075084 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvjbt\" (UniqueName: \"kubernetes.io/projected/4ecc0eca-3a69-4d6c-be9c-a6af5c0fceff-kube-api-access-tvjbt\") pod \"nova-cell0-conductor-0\" (UID: \"4ecc0eca-3a69-4d6c-be9c-a6af5c0fceff\") " pod="openstack/nova-cell0-conductor-0" Oct 14 13:34:20 crc kubenswrapper[4725]: I1014 13:34:20.176954 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvjbt\" (UniqueName: \"kubernetes.io/projected/4ecc0eca-3a69-4d6c-be9c-a6af5c0fceff-kube-api-access-tvjbt\") pod \"nova-cell0-conductor-0\" (UID: \"4ecc0eca-3a69-4d6c-be9c-a6af5c0fceff\") " pod="openstack/nova-cell0-conductor-0" Oct 14 13:34:20 crc kubenswrapper[4725]: I1014 13:34:20.177077 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ecc0eca-3a69-4d6c-be9c-a6af5c0fceff-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"4ecc0eca-3a69-4d6c-be9c-a6af5c0fceff\") " pod="openstack/nova-cell0-conductor-0" Oct 14 13:34:20 crc kubenswrapper[4725]: 
I1014 13:34:20.177120 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ecc0eca-3a69-4d6c-be9c-a6af5c0fceff-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"4ecc0eca-3a69-4d6c-be9c-a6af5c0fceff\") " pod="openstack/nova-cell0-conductor-0" Oct 14 13:34:20 crc kubenswrapper[4725]: I1014 13:34:20.180854 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ecc0eca-3a69-4d6c-be9c-a6af5c0fceff-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"4ecc0eca-3a69-4d6c-be9c-a6af5c0fceff\") " pod="openstack/nova-cell0-conductor-0" Oct 14 13:34:20 crc kubenswrapper[4725]: I1014 13:34:20.187037 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ecc0eca-3a69-4d6c-be9c-a6af5c0fceff-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"4ecc0eca-3a69-4d6c-be9c-a6af5c0fceff\") " pod="openstack/nova-cell0-conductor-0" Oct 14 13:34:20 crc kubenswrapper[4725]: I1014 13:34:20.195286 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvjbt\" (UniqueName: \"kubernetes.io/projected/4ecc0eca-3a69-4d6c-be9c-a6af5c0fceff-kube-api-access-tvjbt\") pod \"nova-cell0-conductor-0\" (UID: \"4ecc0eca-3a69-4d6c-be9c-a6af5c0fceff\") " pod="openstack/nova-cell0-conductor-0" Oct 14 13:34:20 crc kubenswrapper[4725]: I1014 13:34:20.262651 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 14 13:34:20 crc kubenswrapper[4725]: I1014 13:34:20.718068 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 14 13:34:20 crc kubenswrapper[4725]: I1014 13:34:20.814268 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"4ecc0eca-3a69-4d6c-be9c-a6af5c0fceff","Type":"ContainerStarted","Data":"1839ddbf14657f09d15609851dfc4fc58555080efc0b36f327135ae56f68cc25"} Oct 14 13:34:21 crc kubenswrapper[4725]: I1014 13:34:21.826738 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"4ecc0eca-3a69-4d6c-be9c-a6af5c0fceff","Type":"ContainerStarted","Data":"6b368b074b7336ab05a54fb119435e34082b46a0ba7ee79cfbefd87a5f63c2a6"} Oct 14 13:34:21 crc kubenswrapper[4725]: I1014 13:34:21.827518 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Oct 14 13:34:21 crc kubenswrapper[4725]: I1014 13:34:21.851107 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.851087489 podStartE2EDuration="2.851087489s" podCreationTimestamp="2025-10-14 13:34:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:34:21.844877257 +0000 UTC m=+1178.693312096" watchObservedRunningTime="2025-10-14 13:34:21.851087489 +0000 UTC m=+1178.699522298" Oct 14 13:34:25 crc kubenswrapper[4725]: I1014 13:34:25.298540 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Oct 14 13:34:25 crc kubenswrapper[4725]: I1014 13:34:25.760845 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-8nbmk"] Oct 14 13:34:25 crc kubenswrapper[4725]: I1014 13:34:25.762170 4725 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8nbmk" Oct 14 13:34:25 crc kubenswrapper[4725]: I1014 13:34:25.764622 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Oct 14 13:34:25 crc kubenswrapper[4725]: I1014 13:34:25.764678 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Oct 14 13:34:25 crc kubenswrapper[4725]: I1014 13:34:25.768943 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-8nbmk"] Oct 14 13:34:25 crc kubenswrapper[4725]: I1014 13:34:25.894677 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83d1426f-ea53-45ba-a397-e03f35b28711-config-data\") pod \"nova-cell0-cell-mapping-8nbmk\" (UID: \"83d1426f-ea53-45ba-a397-e03f35b28711\") " pod="openstack/nova-cell0-cell-mapping-8nbmk" Oct 14 13:34:25 crc kubenswrapper[4725]: I1014 13:34:25.894796 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn2n7\" (UniqueName: \"kubernetes.io/projected/83d1426f-ea53-45ba-a397-e03f35b28711-kube-api-access-gn2n7\") pod \"nova-cell0-cell-mapping-8nbmk\" (UID: \"83d1426f-ea53-45ba-a397-e03f35b28711\") " pod="openstack/nova-cell0-cell-mapping-8nbmk" Oct 14 13:34:25 crc kubenswrapper[4725]: I1014 13:34:25.894868 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83d1426f-ea53-45ba-a397-e03f35b28711-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8nbmk\" (UID: \"83d1426f-ea53-45ba-a397-e03f35b28711\") " pod="openstack/nova-cell0-cell-mapping-8nbmk" Oct 14 13:34:25 crc kubenswrapper[4725]: I1014 13:34:25.894909 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83d1426f-ea53-45ba-a397-e03f35b28711-scripts\") pod \"nova-cell0-cell-mapping-8nbmk\" (UID: \"83d1426f-ea53-45ba-a397-e03f35b28711\") " pod="openstack/nova-cell0-cell-mapping-8nbmk" Oct 14 13:34:25 crc kubenswrapper[4725]: I1014 13:34:25.943002 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 13:34:25 crc kubenswrapper[4725]: I1014 13:34:25.944540 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 13:34:25 crc kubenswrapper[4725]: I1014 13:34:25.950855 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 14 13:34:25 crc kubenswrapper[4725]: I1014 13:34:25.959669 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 13:34:25 crc kubenswrapper[4725]: I1014 13:34:25.977019 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 14 13:34:25 crc kubenswrapper[4725]: I1014 13:34:25.978131 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:34:25 crc kubenswrapper[4725]: I1014 13:34:25.981483 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Oct 14 13:34:25 crc kubenswrapper[4725]: I1014 13:34:25.997784 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee779039-892b-461d-aea0-1139654037bf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ee779039-892b-461d-aea0-1139654037bf\") " pod="openstack/nova-scheduler-0" Oct 14 13:34:25 crc kubenswrapper[4725]: I1014 13:34:25.998241 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83d1426f-ea53-45ba-a397-e03f35b28711-config-data\") pod \"nova-cell0-cell-mapping-8nbmk\" (UID: \"83d1426f-ea53-45ba-a397-e03f35b28711\") " pod="openstack/nova-cell0-cell-mapping-8nbmk" Oct 14 13:34:25 crc kubenswrapper[4725]: I1014 13:34:25.998423 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee779039-892b-461d-aea0-1139654037bf-config-data\") pod \"nova-scheduler-0\" (UID: \"ee779039-892b-461d-aea0-1139654037bf\") " pod="openstack/nova-scheduler-0" Oct 14 13:34:25 crc kubenswrapper[4725]: I1014 13:34:25.998621 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn2n7\" (UniqueName: \"kubernetes.io/projected/83d1426f-ea53-45ba-a397-e03f35b28711-kube-api-access-gn2n7\") pod \"nova-cell0-cell-mapping-8nbmk\" (UID: \"83d1426f-ea53-45ba-a397-e03f35b28711\") " pod="openstack/nova-cell0-cell-mapping-8nbmk" Oct 14 13:34:25 crc kubenswrapper[4725]: I1014 13:34:25.999437 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83d1426f-ea53-45ba-a397-e03f35b28711-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8nbmk\" (UID: \"83d1426f-ea53-45ba-a397-e03f35b28711\") " pod="openstack/nova-cell0-cell-mapping-8nbmk" Oct 14 13:34:25 crc kubenswrapper[4725]: I1014 13:34:25.999585 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83d1426f-ea53-45ba-a397-e03f35b28711-scripts\") pod \"nova-cell0-cell-mapping-8nbmk\" (UID: \"83d1426f-ea53-45ba-a397-e03f35b28711\") " pod="openstack/nova-cell0-cell-mapping-8nbmk" Oct 14 13:34:25 crc kubenswrapper[4725]: I1014 13:34:25.999723 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2nrb\" (UniqueName: \"kubernetes.io/projected/ee779039-892b-461d-aea0-1139654037bf-kube-api-access-x2nrb\") pod \"nova-scheduler-0\" (UID: \"ee779039-892b-461d-aea0-1139654037bf\") " pod="openstack/nova-scheduler-0" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.015509 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83d1426f-ea53-45ba-a397-e03f35b28711-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8nbmk\" (UID: \"83d1426f-ea53-45ba-a397-e03f35b28711\") " pod="openstack/nova-cell0-cell-mapping-8nbmk" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.015888 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/83d1426f-ea53-45ba-a397-e03f35b28711-config-data\") pod \"nova-cell0-cell-mapping-8nbmk\" (UID: \"83d1426f-ea53-45ba-a397-e03f35b28711\") " pod="openstack/nova-cell0-cell-mapping-8nbmk" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.027166 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83d1426f-ea53-45ba-a397-e03f35b28711-scripts\") pod \"nova-cell0-cell-mapping-8nbmk\" (UID: \"83d1426f-ea53-45ba-a397-e03f35b28711\") " pod="openstack/nova-cell0-cell-mapping-8nbmk" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.040874 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn2n7\" (UniqueName: \"kubernetes.io/projected/83d1426f-ea53-45ba-a397-e03f35b28711-kube-api-access-gn2n7\") pod \"nova-cell0-cell-mapping-8nbmk\" (UID: \"83d1426f-ea53-45ba-a397-e03f35b28711\") " pod="openstack/nova-cell0-cell-mapping-8nbmk" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.053498 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.090195 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.091741 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.094706 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.100975 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d602b0f-b981-4160-8ceb-cf6509cb34b6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0d602b0f-b981-4160-8ceb-cf6509cb34b6\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.101041 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2nrb\" (UniqueName: \"kubernetes.io/projected/ee779039-892b-461d-aea0-1139654037bf-kube-api-access-x2nrb\") pod \"nova-scheduler-0\" (UID: \"ee779039-892b-461d-aea0-1139654037bf\") " pod="openstack/nova-scheduler-0" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.101072 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee779039-892b-461d-aea0-1139654037bf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ee779039-892b-461d-aea0-1139654037bf\") " pod="openstack/nova-scheduler-0" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.101108 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee779039-892b-461d-aea0-1139654037bf-config-data\") pod \"nova-scheduler-0\" (UID: \"ee779039-892b-461d-aea0-1139654037bf\") " pod="openstack/nova-scheduler-0" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.101124 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4zdl\" (UniqueName: \"kubernetes.io/projected/0d602b0f-b981-4160-8ceb-cf6509cb34b6-kube-api-access-p4zdl\") pod \"nova-cell1-novncproxy-0\" (UID: \"0d602b0f-b981-4160-8ceb-cf6509cb34b6\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:34:26 crc 
kubenswrapper[4725]: I1014 13:34:26.101156 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d602b0f-b981-4160-8ceb-cf6509cb34b6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0d602b0f-b981-4160-8ceb-cf6509cb34b6\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.105086 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8nbmk" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.116320 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.124791 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee779039-892b-461d-aea0-1139654037bf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ee779039-892b-461d-aea0-1139654037bf\") " pod="openstack/nova-scheduler-0" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.125577 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2nrb\" (UniqueName: \"kubernetes.io/projected/ee779039-892b-461d-aea0-1139654037bf-kube-api-access-x2nrb\") pod \"nova-scheduler-0\" (UID: \"ee779039-892b-461d-aea0-1139654037bf\") " pod="openstack/nova-scheduler-0" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.136022 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee779039-892b-461d-aea0-1139654037bf-config-data\") pod \"nova-scheduler-0\" (UID: \"ee779039-892b-461d-aea0-1139654037bf\") " pod="openstack/nova-scheduler-0" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.185885 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.187402 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.196676 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.204392 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d602b0f-b981-4160-8ceb-cf6509cb34b6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0d602b0f-b981-4160-8ceb-cf6509cb34b6\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.204489 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9rtz\" (UniqueName: \"kubernetes.io/projected/741c0ed3-ec3b-4c57-92d1-eb42974f5383-kube-api-access-v9rtz\") pod \"nova-api-0\" (UID: \"741c0ed3-ec3b-4c57-92d1-eb42974f5383\") " pod="openstack/nova-api-0" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.204568 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/741c0ed3-ec3b-4c57-92d1-eb42974f5383-logs\") pod \"nova-api-0\" (UID: \"741c0ed3-ec3b-4c57-92d1-eb42974f5383\") " pod="openstack/nova-api-0" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.206197 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4zdl\" (UniqueName: \"kubernetes.io/projected/0d602b0f-b981-4160-8ceb-cf6509cb34b6-kube-api-access-p4zdl\") pod \"nova-cell1-novncproxy-0\" (UID: \"0d602b0f-b981-4160-8ceb-cf6509cb34b6\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.206253 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/741c0ed3-ec3b-4c57-92d1-eb42974f5383-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"741c0ed3-ec3b-4c57-92d1-eb42974f5383\") " pod="openstack/nova-api-0" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.206334 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d602b0f-b981-4160-8ceb-cf6509cb34b6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0d602b0f-b981-4160-8ceb-cf6509cb34b6\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.206437 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/741c0ed3-ec3b-4c57-92d1-eb42974f5383-config-data\") pod \"nova-api-0\" (UID: \"741c0ed3-ec3b-4c57-92d1-eb42974f5383\") " pod="openstack/nova-api-0" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.210623 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d602b0f-b981-4160-8ceb-cf6509cb34b6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0d602b0f-b981-4160-8ceb-cf6509cb34b6\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.214951 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d602b0f-b981-4160-8ceb-cf6509cb34b6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0d602b0f-b981-4160-8ceb-cf6509cb34b6\") " 
pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.222047 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.260013 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4zdl\" (UniqueName: \"kubernetes.io/projected/0d602b0f-b981-4160-8ceb-cf6509cb34b6-kube-api-access-p4zdl\") pod \"nova-cell1-novncproxy-0\" (UID: \"0d602b0f-b981-4160-8ceb-cf6509cb34b6\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.262285 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-6ssb2"] Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.263765 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-6ssb2" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.276382 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-6ssb2"] Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.288043 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.297976 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.308637 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6827455-b45f-478e-84ce-7f4c62c456a0-logs\") pod \"nova-metadata-0\" (UID: \"c6827455-b45f-478e-84ce-7f4c62c456a0\") " pod="openstack/nova-metadata-0" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.308708 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9rtz\" (UniqueName: \"kubernetes.io/projected/741c0ed3-ec3b-4c57-92d1-eb42974f5383-kube-api-access-v9rtz\") pod \"nova-api-0\" (UID: \"741c0ed3-ec3b-4c57-92d1-eb42974f5383\") " pod="openstack/nova-api-0" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.308808 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/741c0ed3-ec3b-4c57-92d1-eb42974f5383-logs\") pod \"nova-api-0\" (UID: \"741c0ed3-ec3b-4c57-92d1-eb42974f5383\") " pod="openstack/nova-api-0" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.308839 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c0e9f2f-675f-4836-9e5a-27529c6fecb4-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-6ssb2\" (UID: \"7c0e9f2f-675f-4836-9e5a-27529c6fecb4\") " pod="openstack/dnsmasq-dns-845d6d6f59-6ssb2" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.308870 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkpm7\" (UniqueName: \"kubernetes.io/projected/c6827455-b45f-478e-84ce-7f4c62c456a0-kube-api-access-dkpm7\") pod \"nova-metadata-0\" (UID: \"c6827455-b45f-478e-84ce-7f4c62c456a0\") " pod="openstack/nova-metadata-0" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.308897 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c6827455-b45f-478e-84ce-7f4c62c456a0-config-data\") pod \"nova-metadata-0\" (UID: \"c6827455-b45f-478e-84ce-7f4c62c456a0\") " pod="openstack/nova-metadata-0" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.308927 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/741c0ed3-ec3b-4c57-92d1-eb42974f5383-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"741c0ed3-ec3b-4c57-92d1-eb42974f5383\") " pod="openstack/nova-api-0" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.308952 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c0e9f2f-675f-4836-9e5a-27529c6fecb4-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-6ssb2\" (UID: \"7c0e9f2f-675f-4836-9e5a-27529c6fecb4\") " pod="openstack/dnsmasq-dns-845d6d6f59-6ssb2" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.308978 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c0e9f2f-675f-4836-9e5a-27529c6fecb4-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-6ssb2\" (UID: \"7c0e9f2f-675f-4836-9e5a-27529c6fecb4\") " pod="openstack/dnsmasq-dns-845d6d6f59-6ssb2" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.308999 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c0e9f2f-675f-4836-9e5a-27529c6fecb4-config\") pod \"dnsmasq-dns-845d6d6f59-6ssb2\" (UID: \"7c0e9f2f-675f-4836-9e5a-27529c6fecb4\") " pod="openstack/dnsmasq-dns-845d6d6f59-6ssb2" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.309022 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6827455-b45f-478e-84ce-7f4c62c456a0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c6827455-b45f-478e-84ce-7f4c62c456a0\") " pod="openstack/nova-metadata-0" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.309829 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/741c0ed3-ec3b-4c57-92d1-eb42974f5383-logs\") pod \"nova-api-0\" (UID: \"741c0ed3-ec3b-4c57-92d1-eb42974f5383\") " pod="openstack/nova-api-0" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.312022 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c0e9f2f-675f-4836-9e5a-27529c6fecb4-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-6ssb2\" (UID: \"7c0e9f2f-675f-4836-9e5a-27529c6fecb4\") " pod="openstack/dnsmasq-dns-845d6d6f59-6ssb2" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.312053 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/741c0ed3-ec3b-4c57-92d1-eb42974f5383-config-data\") pod \"nova-api-0\" (UID: \"741c0ed3-ec3b-4c57-92d1-eb42974f5383\") " pod="openstack/nova-api-0" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.312102 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvtfv\" (UniqueName: \"kubernetes.io/projected/7c0e9f2f-675f-4836-9e5a-27529c6fecb4-kube-api-access-mvtfv\") pod \"dnsmasq-dns-845d6d6f59-6ssb2\" (UID: 
\"7c0e9f2f-675f-4836-9e5a-27529c6fecb4\") " pod="openstack/dnsmasq-dns-845d6d6f59-6ssb2" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.314748 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/741c0ed3-ec3b-4c57-92d1-eb42974f5383-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"741c0ed3-ec3b-4c57-92d1-eb42974f5383\") " pod="openstack/nova-api-0" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.316868 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/741c0ed3-ec3b-4c57-92d1-eb42974f5383-config-data\") pod \"nova-api-0\" (UID: \"741c0ed3-ec3b-4c57-92d1-eb42974f5383\") " pod="openstack/nova-api-0" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.339487 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9rtz\" (UniqueName: \"kubernetes.io/projected/741c0ed3-ec3b-4c57-92d1-eb42974f5383-kube-api-access-v9rtz\") pod \"nova-api-0\" (UID: \"741c0ed3-ec3b-4c57-92d1-eb42974f5383\") " pod="openstack/nova-api-0" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.414321 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c0e9f2f-675f-4836-9e5a-27529c6fecb4-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-6ssb2\" (UID: \"7c0e9f2f-675f-4836-9e5a-27529c6fecb4\") " pod="openstack/dnsmasq-dns-845d6d6f59-6ssb2" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.414371 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkpm7\" (UniqueName: \"kubernetes.io/projected/c6827455-b45f-478e-84ce-7f4c62c456a0-kube-api-access-dkpm7\") pod \"nova-metadata-0\" (UID: \"c6827455-b45f-478e-84ce-7f4c62c456a0\") " pod="openstack/nova-metadata-0" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.415670 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6827455-b45f-478e-84ce-7f4c62c456a0-config-data\") pod \"nova-metadata-0\" (UID: \"c6827455-b45f-478e-84ce-7f4c62c456a0\") " pod="openstack/nova-metadata-0" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.415743 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c0e9f2f-675f-4836-9e5a-27529c6fecb4-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-6ssb2\" (UID: \"7c0e9f2f-675f-4836-9e5a-27529c6fecb4\") " pod="openstack/dnsmasq-dns-845d6d6f59-6ssb2" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.415781 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c0e9f2f-675f-4836-9e5a-27529c6fecb4-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-6ssb2\" (UID: \"7c0e9f2f-675f-4836-9e5a-27529c6fecb4\") " pod="openstack/dnsmasq-dns-845d6d6f59-6ssb2" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.415806 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c0e9f2f-675f-4836-9e5a-27529c6fecb4-config\") pod \"dnsmasq-dns-845d6d6f59-6ssb2\" (UID: \"7c0e9f2f-675f-4836-9e5a-27529c6fecb4\") " pod="openstack/dnsmasq-dns-845d6d6f59-6ssb2" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.415839 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6827455-b45f-478e-84ce-7f4c62c456a0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c6827455-b45f-478e-84ce-7f4c62c456a0\") " pod="openstack/nova-metadata-0" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.415876 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c0e9f2f-675f-4836-9e5a-27529c6fecb4-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-6ssb2\" (UID: \"7c0e9f2f-675f-4836-9e5a-27529c6fecb4\") " pod="openstack/dnsmasq-dns-845d6d6f59-6ssb2" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.415921 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvtfv\" (UniqueName: \"kubernetes.io/projected/7c0e9f2f-675f-4836-9e5a-27529c6fecb4-kube-api-access-mvtfv\") pod \"dnsmasq-dns-845d6d6f59-6ssb2\" (UID: \"7c0e9f2f-675f-4836-9e5a-27529c6fecb4\") " pod="openstack/dnsmasq-dns-845d6d6f59-6ssb2" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.415959 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6827455-b45f-478e-84ce-7f4c62c456a0-logs\") pod \"nova-metadata-0\" (UID: \"c6827455-b45f-478e-84ce-7f4c62c456a0\") " pod="openstack/nova-metadata-0" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.417112 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c0e9f2f-675f-4836-9e5a-27529c6fecb4-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-6ssb2\" (UID: \"7c0e9f2f-675f-4836-9e5a-27529c6fecb4\") " pod="openstack/dnsmasq-dns-845d6d6f59-6ssb2" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.417542 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c0e9f2f-675f-4836-9e5a-27529c6fecb4-config\") pod \"dnsmasq-dns-845d6d6f59-6ssb2\" (UID: \"7c0e9f2f-675f-4836-9e5a-27529c6fecb4\") " pod="openstack/dnsmasq-dns-845d6d6f59-6ssb2" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.417660 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c0e9f2f-675f-4836-9e5a-27529c6fecb4-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-6ssb2\" (UID: \"7c0e9f2f-675f-4836-9e5a-27529c6fecb4\") " pod="openstack/dnsmasq-dns-845d6d6f59-6ssb2" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.418265 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6827455-b45f-478e-84ce-7f4c62c456a0-logs\") pod \"nova-metadata-0\" (UID: \"c6827455-b45f-478e-84ce-7f4c62c456a0\") " pod="openstack/nova-metadata-0" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.418342 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c0e9f2f-675f-4836-9e5a-27529c6fecb4-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-6ssb2\" (UID: \"7c0e9f2f-675f-4836-9e5a-27529c6fecb4\") " pod="openstack/dnsmasq-dns-845d6d6f59-6ssb2" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.418356 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c0e9f2f-675f-4836-9e5a-27529c6fecb4-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-6ssb2\" (UID: \"7c0e9f2f-675f-4836-9e5a-27529c6fecb4\") " 
pod="openstack/dnsmasq-dns-845d6d6f59-6ssb2" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.427018 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6827455-b45f-478e-84ce-7f4c62c456a0-config-data\") pod \"nova-metadata-0\" (UID: \"c6827455-b45f-478e-84ce-7f4c62c456a0\") " pod="openstack/nova-metadata-0" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.435553 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkpm7\" (UniqueName: \"kubernetes.io/projected/c6827455-b45f-478e-84ce-7f4c62c456a0-kube-api-access-dkpm7\") pod \"nova-metadata-0\" (UID: \"c6827455-b45f-478e-84ce-7f4c62c456a0\") " pod="openstack/nova-metadata-0" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.436042 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6827455-b45f-478e-84ce-7f4c62c456a0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c6827455-b45f-478e-84ce-7f4c62c456a0\") " pod="openstack/nova-metadata-0" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.445090 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvtfv\" (UniqueName: \"kubernetes.io/projected/7c0e9f2f-675f-4836-9e5a-27529c6fecb4-kube-api-access-mvtfv\") pod \"dnsmasq-dns-845d6d6f59-6ssb2\" (UID: \"7c0e9f2f-675f-4836-9e5a-27529c6fecb4\") " pod="openstack/dnsmasq-dns-845d6d6f59-6ssb2" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.553923 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.574904 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.600865 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-6ssb2" Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.757473 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-8nbmk"] Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.912534 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8nbmk" event={"ID":"83d1426f-ea53-45ba-a397-e03f35b28711","Type":"ContainerStarted","Data":"6cbc88f15e5c2592743b296013f0fd13399cb7f697fed76bb7fe05adabfd2d23"} Oct 14 13:34:26 crc kubenswrapper[4725]: I1014 13:34:26.950815 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 13:34:27 crc kubenswrapper[4725]: I1014 13:34:27.074947 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 14 13:34:27 crc kubenswrapper[4725]: I1014 13:34:27.129537 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-dgd9g"] Oct 14 13:34:27 crc kubenswrapper[4725]: I1014 13:34:27.142885 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-dgd9g"] Oct 14 13:34:27 crc kubenswrapper[4725]: I1014 13:34:27.143005 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-dgd9g" Oct 14 13:34:27 crc kubenswrapper[4725]: I1014 13:34:27.145267 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Oct 14 13:34:27 crc kubenswrapper[4725]: I1014 13:34:27.145315 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Oct 14 13:34:27 crc kubenswrapper[4725]: I1014 13:34:27.240291 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 14 13:34:27 crc kubenswrapper[4725]: I1014 13:34:27.243167 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfa8d069-e628-42b1-a2c3-f099375ffff3-config-data\") pod \"nova-cell1-conductor-db-sync-dgd9g\" (UID: \"bfa8d069-e628-42b1-a2c3-f099375ffff3\") " pod="openstack/nova-cell1-conductor-db-sync-dgd9g" Oct 14 13:34:27 crc kubenswrapper[4725]: I1014 13:34:27.243538 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfa8d069-e628-42b1-a2c3-f099375ffff3-scripts\") pod \"nova-cell1-conductor-db-sync-dgd9g\" (UID: \"bfa8d069-e628-42b1-a2c3-f099375ffff3\") " pod="openstack/nova-cell1-conductor-db-sync-dgd9g" Oct 14 13:34:27 crc kubenswrapper[4725]: I1014 13:34:27.243606 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa8d069-e628-42b1-a2c3-f099375ffff3-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-dgd9g\" (UID: \"bfa8d069-e628-42b1-a2c3-f099375ffff3\") " pod="openstack/nova-cell1-conductor-db-sync-dgd9g" Oct 14 13:34:27 crc kubenswrapper[4725]: I1014 13:34:27.243659 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qxmp\" (UniqueName: \"kubernetes.io/projected/bfa8d069-e628-42b1-a2c3-f099375ffff3-kube-api-access-9qxmp\") pod \"nova-cell1-conductor-db-sync-dgd9g\" (UID: \"bfa8d069-e628-42b1-a2c3-f099375ffff3\") " pod="openstack/nova-cell1-conductor-db-sync-dgd9g" Oct 14 13:34:27 crc kubenswrapper[4725]: I1014 13:34:27.250627 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-6ssb2"] Oct 14 13:34:27 crc kubenswrapper[4725]: I1014 13:34:27.348754 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfa8d069-e628-42b1-a2c3-f099375ffff3-scripts\") pod \"nova-cell1-conductor-db-sync-dgd9g\" (UID: \"bfa8d069-e628-42b1-a2c3-f099375ffff3\") " pod="openstack/nova-cell1-conductor-db-sync-dgd9g" Oct 14 13:34:27 crc kubenswrapper[4725]: I1014 13:34:27.348800 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa8d069-e628-42b1-a2c3-f099375ffff3-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-dgd9g\" (UID: \"bfa8d069-e628-42b1-a2c3-f099375ffff3\") " pod="openstack/nova-cell1-conductor-db-sync-dgd9g" Oct 14 13:34:27 crc kubenswrapper[4725]: I1014 13:34:27.348828 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qxmp\" (UniqueName: \"kubernetes.io/projected/bfa8d069-e628-42b1-a2c3-f099375ffff3-kube-api-access-9qxmp\") pod \"nova-cell1-conductor-db-sync-dgd9g\" (UID: 
\"bfa8d069-e628-42b1-a2c3-f099375ffff3\") " pod="openstack/nova-cell1-conductor-db-sync-dgd9g" Oct 14 13:34:27 crc kubenswrapper[4725]: I1014 13:34:27.348973 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfa8d069-e628-42b1-a2c3-f099375ffff3-config-data\") pod \"nova-cell1-conductor-db-sync-dgd9g\" (UID: \"bfa8d069-e628-42b1-a2c3-f099375ffff3\") " pod="openstack/nova-cell1-conductor-db-sync-dgd9g" Oct 14 13:34:27 crc kubenswrapper[4725]: I1014 13:34:27.355239 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfa8d069-e628-42b1-a2c3-f099375ffff3-scripts\") pod \"nova-cell1-conductor-db-sync-dgd9g\" (UID: \"bfa8d069-e628-42b1-a2c3-f099375ffff3\") " pod="openstack/nova-cell1-conductor-db-sync-dgd9g" Oct 14 13:34:27 crc kubenswrapper[4725]: I1014 13:34:27.355366 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa8d069-e628-42b1-a2c3-f099375ffff3-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-dgd9g\" (UID: \"bfa8d069-e628-42b1-a2c3-f099375ffff3\") " pod="openstack/nova-cell1-conductor-db-sync-dgd9g" Oct 14 13:34:27 crc kubenswrapper[4725]: I1014 13:34:27.355388 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfa8d069-e628-42b1-a2c3-f099375ffff3-config-data\") pod \"nova-cell1-conductor-db-sync-dgd9g\" (UID: \"bfa8d069-e628-42b1-a2c3-f099375ffff3\") " pod="openstack/nova-cell1-conductor-db-sync-dgd9g" Oct 14 13:34:27 crc kubenswrapper[4725]: I1014 13:34:27.370799 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qxmp\" (UniqueName: \"kubernetes.io/projected/bfa8d069-e628-42b1-a2c3-f099375ffff3-kube-api-access-9qxmp\") pod \"nova-cell1-conductor-db-sync-dgd9g\" (UID: \"bfa8d069-e628-42b1-a2c3-f099375ffff3\") " pod="openstack/nova-cell1-conductor-db-sync-dgd9g" Oct 14 13:34:27 crc kubenswrapper[4725]: I1014 13:34:27.394528 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 13:34:27 crc kubenswrapper[4725]: W1014 13:34:27.408761 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6827455_b45f_478e_84ce_7f4c62c456a0.slice/crio-1a6655c882db87bab0a92459674a36814c0ff2f2a2321ea0f1c5d72d9ccd2f07 WatchSource:0}: Error finding container 1a6655c882db87bab0a92459674a36814c0ff2f2a2321ea0f1c5d72d9ccd2f07: Status 404 returned error can't find the container with id 1a6655c882db87bab0a92459674a36814c0ff2f2a2321ea0f1c5d72d9ccd2f07 Oct 14 13:34:27 crc kubenswrapper[4725]: I1014 13:34:27.472387 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-dgd9g" Oct 14 13:34:27 crc kubenswrapper[4725]: I1014 13:34:27.952389 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"741c0ed3-ec3b-4c57-92d1-eb42974f5383","Type":"ContainerStarted","Data":"b211fee2b8556483f5d7fb1f8e6e87ed5babc3dfc2cea21467c6c8221a46b90d"} Oct 14 13:34:27 crc kubenswrapper[4725]: I1014 13:34:27.952709 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c6827455-b45f-478e-84ce-7f4c62c456a0","Type":"ContainerStarted","Data":"1a6655c882db87bab0a92459674a36814c0ff2f2a2321ea0f1c5d72d9ccd2f07"} Oct 14 13:34:27 crc kubenswrapper[4725]: I1014 13:34:27.952721 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ee779039-892b-461d-aea0-1139654037bf","Type":"ContainerStarted","Data":"212a4b9c3abb6ca202ab959b806b51845446bbc8fe81991901884dc32161cd59"} Oct 14 13:34:27 crc kubenswrapper[4725]: I1014 13:34:27.952731 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8nbmk" event={"ID":"83d1426f-ea53-45ba-a397-e03f35b28711","Type":"ContainerStarted","Data":"020c7744062f32160dc40a4a307a183c598276f990787c6fdd2039dbccc46b1c"} Oct 14 13:34:27 crc kubenswrapper[4725]: I1014 13:34:27.954784 4725 generic.go:334] "Generic (PLEG): container finished" podID="7c0e9f2f-675f-4836-9e5a-27529c6fecb4" containerID="f0e4e5eeca0194c44326b5107f54bef2af41f026daed4d5cab747f61521eaef3" exitCode=0 Oct 14 13:34:27 crc kubenswrapper[4725]: I1014 13:34:27.954839 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-6ssb2" event={"ID":"7c0e9f2f-675f-4836-9e5a-27529c6fecb4","Type":"ContainerDied","Data":"f0e4e5eeca0194c44326b5107f54bef2af41f026daed4d5cab747f61521eaef3"} Oct 14 13:34:27 crc kubenswrapper[4725]: I1014 13:34:27.954862 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-6ssb2" event={"ID":"7c0e9f2f-675f-4836-9e5a-27529c6fecb4","Type":"ContainerStarted","Data":"f6faa51a7b77ac5de8e7ed4767fb2552111a4d72607d0fea8c42b5ee23c14a67"} Oct 14 13:34:27 crc kubenswrapper[4725]: I1014 13:34:27.955994 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-dgd9g"] Oct 14 13:34:27 crc kubenswrapper[4725]: I1014 13:34:27.957249 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0d602b0f-b981-4160-8ceb-cf6509cb34b6","Type":"ContainerStarted","Data":"1aea7cdd2cbe532fb1c044b0a30a6f1688e878e4b51624c4ac7ab0551d9797fa"} Oct 14 13:34:27 crc kubenswrapper[4725]: I1014 13:34:27.970717 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-8nbmk" podStartSLOduration=2.970694007 podStartE2EDuration="2.970694007s" podCreationTimestamp="2025-10-14 13:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:34:27.962604264 +0000 UTC m=+1184.811039073" watchObservedRunningTime="2025-10-14 13:34:27.970694007 +0000 UTC m=+1184.819128836" Oct 14 13:34:27 crc kubenswrapper[4725]: W1014 13:34:27.987871 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfa8d069_e628_42b1_a2c3_f099375ffff3.slice/crio-f94820667f06d105b43f2623f0bad7c9f2d5ad84f47cce310221cc22e6319b56 WatchSource:0}: 
Error finding container f94820667f06d105b43f2623f0bad7c9f2d5ad84f47cce310221cc22e6319b56: Status 404 returned error can't find the container with id f94820667f06d105b43f2623f0bad7c9f2d5ad84f47cce310221cc22e6319b56 Oct 14 13:34:27 crc kubenswrapper[4725]: I1014 13:34:27.995666 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 14 13:34:28 crc kubenswrapper[4725]: I1014 13:34:28.980651 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-dgd9g" event={"ID":"bfa8d069-e628-42b1-a2c3-f099375ffff3","Type":"ContainerStarted","Data":"f94820667f06d105b43f2623f0bad7c9f2d5ad84f47cce310221cc22e6319b56"} Oct 14 13:34:29 crc kubenswrapper[4725]: I1014 13:34:29.617962 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 13:34:29 crc kubenswrapper[4725]: I1014 13:34:29.630827 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 14 13:34:31 crc kubenswrapper[4725]: I1014 13:34:31.013751 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-6ssb2" event={"ID":"7c0e9f2f-675f-4836-9e5a-27529c6fecb4","Type":"ContainerStarted","Data":"71eaadb72d625e86d643f5d7d463a401a40bffb9fb1340a991113530f500e740"} Oct 14 13:34:31 crc kubenswrapper[4725]: I1014 13:34:31.014243 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-845d6d6f59-6ssb2" Oct 14 13:34:31 crc kubenswrapper[4725]: I1014 13:34:31.016958 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-dgd9g" event={"ID":"bfa8d069-e628-42b1-a2c3-f099375ffff3","Type":"ContainerStarted","Data":"5803a7182b99075f418b8ab57c8d00430a74a5361a003e3ab534e1801fd3e228"} Oct 14 13:34:31 crc kubenswrapper[4725]: I1014 13:34:31.020979 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"741c0ed3-ec3b-4c57-92d1-eb42974f5383","Type":"ContainerStarted","Data":"8da76079f9d2dd0a6e78befd69caec483d3200bbcd6608f2530ec85695cb28a5"} Oct 14 13:34:31 crc kubenswrapper[4725]: I1014 13:34:31.021021 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"741c0ed3-ec3b-4c57-92d1-eb42974f5383","Type":"ContainerStarted","Data":"95ab5842a0f82490d77380f5d1a33d14f89d5eed63745acc96766e0869097331"} Oct 14 13:34:31 crc kubenswrapper[4725]: I1014 13:34:31.031819 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c6827455-b45f-478e-84ce-7f4c62c456a0","Type":"ContainerStarted","Data":"5a6adabe36af2b17ffc9ede101c3da1393344130f75c94d3bc313ca8ae8756da"} Oct 14 13:34:31 crc kubenswrapper[4725]: I1014 13:34:31.031906 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c6827455-b45f-478e-84ce-7f4c62c456a0","Type":"ContainerStarted","Data":"ee333c95676e88a987c5158f30a63d5d7a6fc0082342b2fe2f7cced58694de6e"} Oct 14 13:34:31 crc kubenswrapper[4725]: I1014 13:34:31.032387 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c6827455-b45f-478e-84ce-7f4c62c456a0" containerName="nova-metadata-log" containerID="cri-o://ee333c95676e88a987c5158f30a63d5d7a6fc0082342b2fe2f7cced58694de6e" gracePeriod=30 Oct 14 13:34:31 crc kubenswrapper[4725]: I1014 13:34:31.032660 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" 
podUID="c6827455-b45f-478e-84ce-7f4c62c456a0" containerName="nova-metadata-metadata" containerID="cri-o://5a6adabe36af2b17ffc9ede101c3da1393344130f75c94d3bc313ca8ae8756da" gracePeriod=30 Oct 14 13:34:31 crc kubenswrapper[4725]: I1014 13:34:31.032799 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-845d6d6f59-6ssb2" podStartSLOduration=5.03278527 podStartE2EDuration="5.03278527s" podCreationTimestamp="2025-10-14 13:34:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:34:31.030846328 +0000 UTC m=+1187.879281137" watchObservedRunningTime="2025-10-14 13:34:31.03278527 +0000 UTC m=+1187.881220079" Oct 14 13:34:31 crc kubenswrapper[4725]: I1014 13:34:31.042497 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ee779039-892b-461d-aea0-1139654037bf","Type":"ContainerStarted","Data":"b294d6794d812e5edf3ef6a9e958a0e819a4d9cef0713b11e446bf4a94f417e6"} Oct 14 13:34:31 crc kubenswrapper[4725]: I1014 13:34:31.075968 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-dgd9g" podStartSLOduration=4.075925624 podStartE2EDuration="4.075925624s" podCreationTimestamp="2025-10-14 13:34:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:34:31.05119791 +0000 UTC m=+1187.899632729" watchObservedRunningTime="2025-10-14 13:34:31.075925624 +0000 UTC m=+1187.924360443" Oct 14 13:34:31 crc kubenswrapper[4725]: I1014 13:34:31.085626 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.78108715 podStartE2EDuration="5.085582351s" podCreationTimestamp="2025-10-14 13:34:26 +0000 UTC" firstStartedPulling="2025-10-14 13:34:27.254627365 +0000 UTC m=+1184.103062174" lastFinishedPulling="2025-10-14 13:34:29.559122566 +0000 UTC m=+1186.407557375" observedRunningTime="2025-10-14 13:34:31.069827646 +0000 UTC m=+1187.918262455" watchObservedRunningTime="2025-10-14 13:34:31.085582351 +0000 UTC m=+1187.934017170" Oct 14 13:34:31 crc kubenswrapper[4725]: I1014 13:34:31.098427 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.572676677 podStartE2EDuration="6.098410946s" podCreationTimestamp="2025-10-14 13:34:25 +0000 UTC" firstStartedPulling="2025-10-14 13:34:26.977592404 +0000 UTC m=+1183.826027213" lastFinishedPulling="2025-10-14 13:34:29.503326673 +0000 UTC m=+1186.351761482" observedRunningTime="2025-10-14 13:34:31.08408551 +0000 UTC m=+1187.932520359" watchObservedRunningTime="2025-10-14 13:34:31.098410946 +0000 UTC m=+1187.946845775" Oct 14 13:34:31 crc kubenswrapper[4725]: I1014 13:34:31.111874 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.927323735 podStartE2EDuration="5.111840408s" podCreationTimestamp="2025-10-14 13:34:26 +0000 UTC" firstStartedPulling="2025-10-14 13:34:27.410963148 +0000 UTC m=+1184.259397957" lastFinishedPulling="2025-10-14 13:34:29.595479811 +0000 UTC m=+1186.443914630" observedRunningTime="2025-10-14 13:34:31.101886222 +0000 UTC m=+1187.950321031" watchObservedRunningTime="2025-10-14 13:34:31.111840408 +0000 UTC m=+1187.960275217" Oct 14 13:34:31 crc kubenswrapper[4725]: I1014 13:34:31.288969 4725 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 14 13:34:31 crc kubenswrapper[4725]: I1014 13:34:31.575800 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 14 13:34:31 crc kubenswrapper[4725]: I1014 13:34:31.575875 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 14 13:34:32 crc kubenswrapper[4725]: I1014 13:34:32.072574 4725 generic.go:334] "Generic (PLEG): container finished" podID="c6827455-b45f-478e-84ce-7f4c62c456a0" containerID="5a6adabe36af2b17ffc9ede101c3da1393344130f75c94d3bc313ca8ae8756da" exitCode=0 Oct 14 13:34:32 crc kubenswrapper[4725]: I1014 13:34:32.072851 4725 generic.go:334] "Generic (PLEG): container finished" podID="c6827455-b45f-478e-84ce-7f4c62c456a0" containerID="ee333c95676e88a987c5158f30a63d5d7a6fc0082342b2fe2f7cced58694de6e" exitCode=143 Oct 14 13:34:32 crc kubenswrapper[4725]: I1014 13:34:32.073789 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c6827455-b45f-478e-84ce-7f4c62c456a0","Type":"ContainerDied","Data":"5a6adabe36af2b17ffc9ede101c3da1393344130f75c94d3bc313ca8ae8756da"} Oct 14 13:34:32 crc kubenswrapper[4725]: I1014 13:34:32.073821 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c6827455-b45f-478e-84ce-7f4c62c456a0","Type":"ContainerDied","Data":"ee333c95676e88a987c5158f30a63d5d7a6fc0082342b2fe2f7cced58694de6e"} Oct 14 13:34:32 crc kubenswrapper[4725]: I1014 13:34:32.144136 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 13:34:32 crc kubenswrapper[4725]: I1014 13:34:32.259764 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6827455-b45f-478e-84ce-7f4c62c456a0-logs\") pod \"c6827455-b45f-478e-84ce-7f4c62c456a0\" (UID: \"c6827455-b45f-478e-84ce-7f4c62c456a0\") " Oct 14 13:34:32 crc kubenswrapper[4725]: I1014 13:34:32.259837 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6827455-b45f-478e-84ce-7f4c62c456a0-combined-ca-bundle\") pod \"c6827455-b45f-478e-84ce-7f4c62c456a0\" (UID: \"c6827455-b45f-478e-84ce-7f4c62c456a0\") " Oct 14 13:34:32 crc kubenswrapper[4725]: I1014 13:34:32.259988 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkpm7\" (UniqueName: \"kubernetes.io/projected/c6827455-b45f-478e-84ce-7f4c62c456a0-kube-api-access-dkpm7\") pod \"c6827455-b45f-478e-84ce-7f4c62c456a0\" (UID: \"c6827455-b45f-478e-84ce-7f4c62c456a0\") " Oct 14 13:34:32 crc kubenswrapper[4725]: I1014 13:34:32.260044 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6827455-b45f-478e-84ce-7f4c62c456a0-config-data\") pod \"c6827455-b45f-478e-84ce-7f4c62c456a0\" (UID: \"c6827455-b45f-478e-84ce-7f4c62c456a0\") " Oct 14 13:34:32 crc kubenswrapper[4725]: I1014 13:34:32.260165 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6827455-b45f-478e-84ce-7f4c62c456a0-logs" (OuterVolumeSpecName: "logs") pod "c6827455-b45f-478e-84ce-7f4c62c456a0" (UID: "c6827455-b45f-478e-84ce-7f4c62c456a0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:34:32 crc kubenswrapper[4725]: I1014 13:34:32.260377 4725 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6827455-b45f-478e-84ce-7f4c62c456a0-logs\") on node \"crc\" DevicePath \"\"" Oct 14 13:34:32 crc kubenswrapper[4725]: I1014 13:34:32.266399 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6827455-b45f-478e-84ce-7f4c62c456a0-kube-api-access-dkpm7" (OuterVolumeSpecName: "kube-api-access-dkpm7") pod "c6827455-b45f-478e-84ce-7f4c62c456a0" (UID: "c6827455-b45f-478e-84ce-7f4c62c456a0"). InnerVolumeSpecName "kube-api-access-dkpm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:34:32 crc kubenswrapper[4725]: I1014 13:34:32.296663 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6827455-b45f-478e-84ce-7f4c62c456a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6827455-b45f-478e-84ce-7f4c62c456a0" (UID: "c6827455-b45f-478e-84ce-7f4c62c456a0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:34:32 crc kubenswrapper[4725]: I1014 13:34:32.302605 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6827455-b45f-478e-84ce-7f4c62c456a0-config-data" (OuterVolumeSpecName: "config-data") pod "c6827455-b45f-478e-84ce-7f4c62c456a0" (UID: "c6827455-b45f-478e-84ce-7f4c62c456a0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:34:32 crc kubenswrapper[4725]: I1014 13:34:32.361823 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6827455-b45f-478e-84ce-7f4c62c456a0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:34:32 crc kubenswrapper[4725]: I1014 13:34:32.362137 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkpm7\" (UniqueName: \"kubernetes.io/projected/c6827455-b45f-478e-84ce-7f4c62c456a0-kube-api-access-dkpm7\") on node \"crc\" DevicePath \"\"" Oct 14 13:34:32 crc kubenswrapper[4725]: I1014 13:34:32.362234 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6827455-b45f-478e-84ce-7f4c62c456a0-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 13:34:32 crc kubenswrapper[4725]: I1014 13:34:32.521018 4725 patch_prober.go:28] interesting pod/machine-config-daemon-t9hh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 13:34:32 crc kubenswrapper[4725]: I1014 13:34:32.521078 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 13:34:32 crc kubenswrapper[4725]: I1014 13:34:32.521126 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" Oct 14 13:34:32 crc kubenswrapper[4725]: I1014 13:34:32.521817 4725 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ea00d1d4fd499f409fe49f6dd2f54e9fa910b8219b121324d8eb5e1f54150712"} pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 13:34:32 crc kubenswrapper[4725]: I1014 13:34:32.521875 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" containerID="cri-o://ea00d1d4fd499f409fe49f6dd2f54e9fa910b8219b121324d8eb5e1f54150712" gracePeriod=600 Oct 14 13:34:33 crc kubenswrapper[4725]: I1014 13:34:33.084808 4725 generic.go:334] "Generic (PLEG): container finished" podID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerID="ea00d1d4fd499f409fe49f6dd2f54e9fa910b8219b121324d8eb5e1f54150712" exitCode=0 Oct 14 13:34:33 crc kubenswrapper[4725]: I1014 13:34:33.084860 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" event={"ID":"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c","Type":"ContainerDied","Data":"ea00d1d4fd499f409fe49f6dd2f54e9fa910b8219b121324d8eb5e1f54150712"} Oct 14 13:34:33 crc kubenswrapper[4725]: I1014 13:34:33.085116 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" event={"ID":"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c","Type":"ContainerStarted","Data":"2ed28316bb88cfbc90574ba8c972086f633bc4c19d223765ba69ed10f863412c"} Oct 14 13:34:33 crc kubenswrapper[4725]: I1014 13:34:33.085137 4725 scope.go:117] "RemoveContainer" containerID="aafea91841c00d0b809adb7fadf158b888ea4f36dadb61e7933d5af2c309820b" Oct 14 13:34:33 crc kubenswrapper[4725]: I1014 13:34:33.090758 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c6827455-b45f-478e-84ce-7f4c62c456a0","Type":"ContainerDied","Data":"1a6655c882db87bab0a92459674a36814c0ff2f2a2321ea0f1c5d72d9ccd2f07"} Oct 14 13:34:33 crc kubenswrapper[4725]: I1014 13:34:33.090813 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 13:34:33 crc kubenswrapper[4725]: I1014 13:34:33.096336 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="0d602b0f-b981-4160-8ceb-cf6509cb34b6" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://62f14b0a50026a0cbbe3dc4f1ff7faa1b6f4b345f666ea973771b20c9af8f38e" gracePeriod=30 Oct 14 13:34:33 crc kubenswrapper[4725]: I1014 13:34:33.096643 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0d602b0f-b981-4160-8ceb-cf6509cb34b6","Type":"ContainerStarted","Data":"62f14b0a50026a0cbbe3dc4f1ff7faa1b6f4b345f666ea973771b20c9af8f38e"} Oct 14 13:34:33 crc kubenswrapper[4725]: I1014 13:34:33.152837 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.41659405 podStartE2EDuration="8.152814021s" podCreationTimestamp="2025-10-14 13:34:25 +0000 UTC" firstStartedPulling="2025-10-14 13:34:27.0899427 +0000 UTC m=+1183.938377509" lastFinishedPulling="2025-10-14 13:34:31.826162671 +0000 UTC m=+1188.674597480" observedRunningTime="2025-10-14 13:34:33.132858039 +0000 UTC m=+1189.981292858" watchObservedRunningTime="2025-10-14 13:34:33.152814021 +0000 UTC m=+1190.001248830" Oct 14 13:34:33 crc kubenswrapper[4725]: I1014 13:34:33.153999 4725 scope.go:117] "RemoveContainer" containerID="5a6adabe36af2b17ffc9ede101c3da1393344130f75c94d3bc313ca8ae8756da" Oct 14 13:34:33 crc kubenswrapper[4725]: I1014 13:34:33.205159 4725 scope.go:117] "RemoveContainer" containerID="ee333c95676e88a987c5158f30a63d5d7a6fc0082342b2fe2f7cced58694de6e" Oct 14 13:34:33 crc kubenswrapper[4725]: I1014 13:34:33.231547 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 13:34:33 crc kubenswrapper[4725]: I1014 13:34:33.243307 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 13:34:33 crc kubenswrapper[4725]: I1014 13:34:33.257150 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 14 13:34:33 crc kubenswrapper[4725]: E1014 13:34:33.257572 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6827455-b45f-478e-84ce-7f4c62c456a0" containerName="nova-metadata-metadata" Oct 14 13:34:33 crc kubenswrapper[4725]: I1014 13:34:33.257590 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6827455-b45f-478e-84ce-7f4c62c456a0" containerName="nova-metadata-metadata" Oct 14 13:34:33 crc kubenswrapper[4725]: E1014 13:34:33.257602 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6827455-b45f-478e-84ce-7f4c62c456a0" containerName="nova-metadata-log" Oct 14 13:34:33 crc kubenswrapper[4725]: I1014 13:34:33.257609 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6827455-b45f-478e-84ce-7f4c62c456a0" containerName="nova-metadata-log" Oct 14 13:34:33 crc kubenswrapper[4725]: I1014 13:34:33.257811 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6827455-b45f-478e-84ce-7f4c62c456a0" containerName="nova-metadata-metadata" Oct 14 13:34:33 crc kubenswrapper[4725]: I1014 13:34:33.257838 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6827455-b45f-478e-84ce-7f4c62c456a0" containerName="nova-metadata-log" Oct 14 13:34:33 crc kubenswrapper[4725]: I1014 13:34:33.258826 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 13:34:33 crc kubenswrapper[4725]: I1014 13:34:33.260913 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 14 13:34:33 crc kubenswrapper[4725]: I1014 13:34:33.263366 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 14 13:34:33 crc kubenswrapper[4725]: I1014 13:34:33.268863 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 13:34:33 crc kubenswrapper[4725]: E1014 13:34:33.310866 4725 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6827455_b45f_478e_84ce_7f4c62c456a0.slice\": RecentStats: unable to find data in memory cache]" Oct 14 13:34:33 crc kubenswrapper[4725]: I1014 13:34:33.385080 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff4638a2-73f6-4c09-bf81-92ef54269af1-config-data\") pod \"nova-metadata-0\" (UID: \"ff4638a2-73f6-4c09-bf81-92ef54269af1\") " pod="openstack/nova-metadata-0" Oct 14 13:34:33 crc kubenswrapper[4725]: I1014 13:34:33.385482 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff4638a2-73f6-4c09-bf81-92ef54269af1-logs\") pod \"nova-metadata-0\" (UID: \"ff4638a2-73f6-4c09-bf81-92ef54269af1\") " pod="openstack/nova-metadata-0" Oct 14 13:34:33 crc kubenswrapper[4725]: I1014 13:34:33.385573 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff4638a2-73f6-4c09-bf81-92ef54269af1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ff4638a2-73f6-4c09-bf81-92ef54269af1\") " pod="openstack/nova-metadata-0" Oct 14 13:34:33 crc kubenswrapper[4725]: I1014 13:34:33.385654 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff4638a2-73f6-4c09-bf81-92ef54269af1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ff4638a2-73f6-4c09-bf81-92ef54269af1\") " pod="openstack/nova-metadata-0" Oct 14 13:34:33 crc kubenswrapper[4725]: I1014 13:34:33.385741 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6n9q\" (UniqueName: \"kubernetes.io/projected/ff4638a2-73f6-4c09-bf81-92ef54269af1-kube-api-access-r6n9q\") pod \"nova-metadata-0\" (UID: \"ff4638a2-73f6-4c09-bf81-92ef54269af1\") " pod="openstack/nova-metadata-0" Oct 14 13:34:33 crc kubenswrapper[4725]: I1014 13:34:33.487615 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff4638a2-73f6-4c09-bf81-92ef54269af1-logs\") pod \"nova-metadata-0\" (UID: \"ff4638a2-73f6-4c09-bf81-92ef54269af1\") " pod="openstack/nova-metadata-0" Oct 14 13:34:33 crc kubenswrapper[4725]: I1014 13:34:33.487660 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff4638a2-73f6-4c09-bf81-92ef54269af1-config-data\") pod \"nova-metadata-0\" (UID: \"ff4638a2-73f6-4c09-bf81-92ef54269af1\") " pod="openstack/nova-metadata-0" Oct 14 13:34:33 crc kubenswrapper[4725]: I1014 
Oct 14 13:34:33 crc kubenswrapper[4725]: I1014 13:34:33.487738 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff4638a2-73f6-4c09-bf81-92ef54269af1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ff4638a2-73f6-4c09-bf81-92ef54269af1\") " pod="openstack/nova-metadata-0"
Oct 14 13:34:33 crc kubenswrapper[4725]: I1014 13:34:33.487768 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6n9q\" (UniqueName: \"kubernetes.io/projected/ff4638a2-73f6-4c09-bf81-92ef54269af1-kube-api-access-r6n9q\") pod \"nova-metadata-0\" (UID: \"ff4638a2-73f6-4c09-bf81-92ef54269af1\") " pod="openstack/nova-metadata-0"
Oct 14 13:34:33 crc kubenswrapper[4725]: I1014 13:34:33.488085 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff4638a2-73f6-4c09-bf81-92ef54269af1-logs\") pod \"nova-metadata-0\" (UID: \"ff4638a2-73f6-4c09-bf81-92ef54269af1\") " pod="openstack/nova-metadata-0"
Oct 14 13:34:33 crc kubenswrapper[4725]: I1014 13:34:33.493830 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff4638a2-73f6-4c09-bf81-92ef54269af1-config-data\") pod \"nova-metadata-0\" (UID: \"ff4638a2-73f6-4c09-bf81-92ef54269af1\") " pod="openstack/nova-metadata-0"
Oct 14 13:34:33 crc kubenswrapper[4725]: I1014 13:34:33.500036 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff4638a2-73f6-4c09-bf81-92ef54269af1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ff4638a2-73f6-4c09-bf81-92ef54269af1\") " pod="openstack/nova-metadata-0"
Oct 14 13:34:33 crc kubenswrapper[4725]: I1014 13:34:33.500225 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff4638a2-73f6-4c09-bf81-92ef54269af1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ff4638a2-73f6-4c09-bf81-92ef54269af1\") " pod="openstack/nova-metadata-0"
Oct 14 13:34:33 crc kubenswrapper[4725]: I1014 13:34:33.508084 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6n9q\" (UniqueName: \"kubernetes.io/projected/ff4638a2-73f6-4c09-bf81-92ef54269af1-kube-api-access-r6n9q\") pod \"nova-metadata-0\" (UID: \"ff4638a2-73f6-4c09-bf81-92ef54269af1\") " pod="openstack/nova-metadata-0"
Oct 14 13:34:33 crc kubenswrapper[4725]: I1014 13:34:33.583016 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Oct 14 13:34:33 crc kubenswrapper[4725]: I1014 13:34:33.931644 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6827455-b45f-478e-84ce-7f4c62c456a0" path="/var/lib/kubelet/pods/c6827455-b45f-478e-84ce-7f4c62c456a0/volumes"
Oct 14 13:34:34 crc kubenswrapper[4725]: I1014 13:34:34.039192 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Oct 14 13:34:34 crc kubenswrapper[4725]: I1014 13:34:34.108561 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ff4638a2-73f6-4c09-bf81-92ef54269af1","Type":"ContainerStarted","Data":"33709e57e89e77a3d8e8d9d44b2ce15204538dc0be38ce8e071033f3037da5fc"}
Oct 14 13:34:35 crc kubenswrapper[4725]: I1014 13:34:35.120888 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ff4638a2-73f6-4c09-bf81-92ef54269af1","Type":"ContainerStarted","Data":"78e077d4b71ec0c6e800d9a1fdd6befb619941e29c153af92a0abdcd6ecac02c"}
Oct 14 13:34:35 crc kubenswrapper[4725]: I1014 13:34:35.121342 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ff4638a2-73f6-4c09-bf81-92ef54269af1","Type":"ContainerStarted","Data":"71138109d0e56ae9ebd5dbfab710034417c886b975fbbda6a67b05480406d3f9"}
Oct 14 13:34:35 crc kubenswrapper[4725]: I1014 13:34:35.123823 4725 generic.go:334] "Generic (PLEG): container finished" podID="83d1426f-ea53-45ba-a397-e03f35b28711" containerID="020c7744062f32160dc40a4a307a183c598276f990787c6fdd2039dbccc46b1c" exitCode=0
Oct 14 13:34:35 crc kubenswrapper[4725]: I1014 13:34:35.123863 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8nbmk" event={"ID":"83d1426f-ea53-45ba-a397-e03f35b28711","Type":"ContainerDied","Data":"020c7744062f32160dc40a4a307a183c598276f990787c6fdd2039dbccc46b1c"}
Oct 14 13:34:35 crc kubenswrapper[4725]: I1014 13:34:35.148379 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.148358858 podStartE2EDuration="2.148358858s" podCreationTimestamp="2025-10-14 13:34:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:34:35.142083184 +0000 UTC m=+1191.990518013" watchObservedRunningTime="2025-10-14 13:34:35.148358858 +0000 UTC m=+1191.996793677"
Oct 14 13:34:36 crc kubenswrapper[4725]: I1014 13:34:36.288665 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Oct 14 13:34:36 crc kubenswrapper[4725]: I1014 13:34:36.298866 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Oct 14 13:34:36 crc kubenswrapper[4725]: I1014 13:34:36.344328 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Oct 14 13:34:36 crc kubenswrapper[4725]: I1014 13:34:36.508652 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8nbmk"
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8nbmk" Oct 14 13:34:36 crc kubenswrapper[4725]: I1014 13:34:36.550479 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83d1426f-ea53-45ba-a397-e03f35b28711-combined-ca-bundle\") pod \"83d1426f-ea53-45ba-a397-e03f35b28711\" (UID: \"83d1426f-ea53-45ba-a397-e03f35b28711\") " Oct 14 13:34:36 crc kubenswrapper[4725]: I1014 13:34:36.550632 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gn2n7\" (UniqueName: \"kubernetes.io/projected/83d1426f-ea53-45ba-a397-e03f35b28711-kube-api-access-gn2n7\") pod \"83d1426f-ea53-45ba-a397-e03f35b28711\" (UID: \"83d1426f-ea53-45ba-a397-e03f35b28711\") " Oct 14 13:34:36 crc kubenswrapper[4725]: I1014 13:34:36.550753 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83d1426f-ea53-45ba-a397-e03f35b28711-config-data\") pod \"83d1426f-ea53-45ba-a397-e03f35b28711\" (UID: \"83d1426f-ea53-45ba-a397-e03f35b28711\") " Oct 14 13:34:36 crc kubenswrapper[4725]: I1014 13:34:36.550853 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83d1426f-ea53-45ba-a397-e03f35b28711-scripts\") pod \"83d1426f-ea53-45ba-a397-e03f35b28711\" (UID: \"83d1426f-ea53-45ba-a397-e03f35b28711\") " Oct 14 13:34:36 crc kubenswrapper[4725]: I1014 13:34:36.554799 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 14 13:34:36 crc kubenswrapper[4725]: I1014 13:34:36.554846 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 14 13:34:36 crc kubenswrapper[4725]: I1014 13:34:36.555679 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83d1426f-ea53-45ba-a397-e03f35b28711-scripts" (OuterVolumeSpecName: "scripts") pod "83d1426f-ea53-45ba-a397-e03f35b28711" (UID: "83d1426f-ea53-45ba-a397-e03f35b28711"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:34:36 crc kubenswrapper[4725]: I1014 13:34:36.556862 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83d1426f-ea53-45ba-a397-e03f35b28711-kube-api-access-gn2n7" (OuterVolumeSpecName: "kube-api-access-gn2n7") pod "83d1426f-ea53-45ba-a397-e03f35b28711" (UID: "83d1426f-ea53-45ba-a397-e03f35b28711"). InnerVolumeSpecName "kube-api-access-gn2n7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:34:36 crc kubenswrapper[4725]: I1014 13:34:36.586361 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83d1426f-ea53-45ba-a397-e03f35b28711-config-data" (OuterVolumeSpecName: "config-data") pod "83d1426f-ea53-45ba-a397-e03f35b28711" (UID: "83d1426f-ea53-45ba-a397-e03f35b28711"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:34:36 crc kubenswrapper[4725]: I1014 13:34:36.593993 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83d1426f-ea53-45ba-a397-e03f35b28711-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83d1426f-ea53-45ba-a397-e03f35b28711" (UID: "83d1426f-ea53-45ba-a397-e03f35b28711"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:34:36 crc kubenswrapper[4725]: I1014 13:34:36.604746 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-845d6d6f59-6ssb2" Oct 14 13:34:36 crc kubenswrapper[4725]: I1014 13:34:36.654867 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83d1426f-ea53-45ba-a397-e03f35b28711-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 13:34:36 crc kubenswrapper[4725]: I1014 13:34:36.654904 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83d1426f-ea53-45ba-a397-e03f35b28711-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:34:36 crc kubenswrapper[4725]: I1014 13:34:36.654918 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gn2n7\" (UniqueName: \"kubernetes.io/projected/83d1426f-ea53-45ba-a397-e03f35b28711-kube-api-access-gn2n7\") on node \"crc\" DevicePath \"\"" Oct 14 13:34:36 crc kubenswrapper[4725]: I1014 13:34:36.654930 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83d1426f-ea53-45ba-a397-e03f35b28711-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 13:34:36 crc kubenswrapper[4725]: I1014 13:34:36.674823 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-9tc2g"] Oct 14 13:34:36 crc kubenswrapper[4725]: I1014 13:34:36.675146 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5784cf869f-9tc2g" podUID="063acef7-e054-4cf9-80ed-52c7a384754d" containerName="dnsmasq-dns" containerID="cri-o://2156e62902c02a39c6e55d219e7c1825aa4bf75f465431d51fad0db9eb921482" gracePeriod=10 Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.053288 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-9tc2g" Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.150771 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8nbmk" event={"ID":"83d1426f-ea53-45ba-a397-e03f35b28711","Type":"ContainerDied","Data":"6cbc88f15e5c2592743b296013f0fd13399cb7f697fed76bb7fe05adabfd2d23"} Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.150825 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cbc88f15e5c2592743b296013f0fd13399cb7f697fed76bb7fe05adabfd2d23" Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.150792 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8nbmk" Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.158815 4725 generic.go:334] "Generic (PLEG): container finished" podID="bfa8d069-e628-42b1-a2c3-f099375ffff3" containerID="5803a7182b99075f418b8ab57c8d00430a74a5361a003e3ab534e1801fd3e228" exitCode=0 Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.158874 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-dgd9g" event={"ID":"bfa8d069-e628-42b1-a2c3-f099375ffff3","Type":"ContainerDied","Data":"5803a7182b99075f418b8ab57c8d00430a74a5361a003e3ab534e1801fd3e228"} Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.162570 4725 generic.go:334] "Generic (PLEG): container finished" podID="063acef7-e054-4cf9-80ed-52c7a384754d" containerID="2156e62902c02a39c6e55d219e7c1825aa4bf75f465431d51fad0db9eb921482" exitCode=0 Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.162637 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-9tc2g" Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.162671 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-9tc2g" event={"ID":"063acef7-e054-4cf9-80ed-52c7a384754d","Type":"ContainerDied","Data":"2156e62902c02a39c6e55d219e7c1825aa4bf75f465431d51fad0db9eb921482"} Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.162729 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-9tc2g" event={"ID":"063acef7-e054-4cf9-80ed-52c7a384754d","Type":"ContainerDied","Data":"c9ab03ab85c1c7c9837dfabd46b412de4573ffa2d8f13ba4cb3363796855d570"} Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.162755 4725 scope.go:117] "RemoveContainer" containerID="2156e62902c02a39c6e55d219e7c1825aa4bf75f465431d51fad0db9eb921482" Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.163079 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/063acef7-e054-4cf9-80ed-52c7a384754d-ovsdbserver-sb\") pod \"063acef7-e054-4cf9-80ed-52c7a384754d\" (UID: \"063acef7-e054-4cf9-80ed-52c7a384754d\") " Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.163134 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/063acef7-e054-4cf9-80ed-52c7a384754d-dns-svc\") pod \"063acef7-e054-4cf9-80ed-52c7a384754d\" (UID: \"063acef7-e054-4cf9-80ed-52c7a384754d\") " Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.163165 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vw2jn\" (UniqueName: \"kubernetes.io/projected/063acef7-e054-4cf9-80ed-52c7a384754d-kube-api-access-vw2jn\") pod \"063acef7-e054-4cf9-80ed-52c7a384754d\" (UID: \"063acef7-e054-4cf9-80ed-52c7a384754d\") " Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.163199 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/063acef7-e054-4cf9-80ed-52c7a384754d-dns-swift-storage-0\") pod \"063acef7-e054-4cf9-80ed-52c7a384754d\" (UID: \"063acef7-e054-4cf9-80ed-52c7a384754d\") " Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.163214 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/063acef7-e054-4cf9-80ed-52c7a384754d-config\") pod \"063acef7-e054-4cf9-80ed-52c7a384754d\" (UID: \"063acef7-e054-4cf9-80ed-52c7a384754d\") " Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.163266 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/063acef7-e054-4cf9-80ed-52c7a384754d-ovsdbserver-nb\") pod \"063acef7-e054-4cf9-80ed-52c7a384754d\" (UID: \"063acef7-e054-4cf9-80ed-52c7a384754d\") " Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.168529 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/063acef7-e054-4cf9-80ed-52c7a384754d-kube-api-access-vw2jn" (OuterVolumeSpecName: "kube-api-access-vw2jn") pod "063acef7-e054-4cf9-80ed-52c7a384754d" (UID: "063acef7-e054-4cf9-80ed-52c7a384754d"). InnerVolumeSpecName "kube-api-access-vw2jn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.190921 4725 scope.go:117] "RemoveContainer" containerID="e2171f1a65477c2a1a866f2f41b85dfe2710ef7caa00a5c98632520985ff599d" Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.225732 4725 scope.go:117] "RemoveContainer" containerID="2156e62902c02a39c6e55d219e7c1825aa4bf75f465431d51fad0db9eb921482" Oct 14 13:34:37 crc kubenswrapper[4725]: E1014 13:34:37.227417 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2156e62902c02a39c6e55d219e7c1825aa4bf75f465431d51fad0db9eb921482\": container with ID starting with 2156e62902c02a39c6e55d219e7c1825aa4bf75f465431d51fad0db9eb921482 not found: ID does not exist" containerID="2156e62902c02a39c6e55d219e7c1825aa4bf75f465431d51fad0db9eb921482" Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.229564 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2156e62902c02a39c6e55d219e7c1825aa4bf75f465431d51fad0db9eb921482"} err="failed to get container status \"2156e62902c02a39c6e55d219e7c1825aa4bf75f465431d51fad0db9eb921482\": rpc error: code = NotFound desc = could not find container \"2156e62902c02a39c6e55d219e7c1825aa4bf75f465431d51fad0db9eb921482\": container with ID starting with 2156e62902c02a39c6e55d219e7c1825aa4bf75f465431d51fad0db9eb921482 not found: ID does not exist" Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.229689 4725 scope.go:117] "RemoveContainer" containerID="e2171f1a65477c2a1a866f2f41b85dfe2710ef7caa00a5c98632520985ff599d" Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.230170 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/063acef7-e054-4cf9-80ed-52c7a384754d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "063acef7-e054-4cf9-80ed-52c7a384754d" (UID: "063acef7-e054-4cf9-80ed-52c7a384754d"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:34:37 crc kubenswrapper[4725]: E1014 13:34:37.230240 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2171f1a65477c2a1a866f2f41b85dfe2710ef7caa00a5c98632520985ff599d\": container with ID starting with e2171f1a65477c2a1a866f2f41b85dfe2710ef7caa00a5c98632520985ff599d not found: ID does not exist" containerID="e2171f1a65477c2a1a866f2f41b85dfe2710ef7caa00a5c98632520985ff599d" Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.230289 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2171f1a65477c2a1a866f2f41b85dfe2710ef7caa00a5c98632520985ff599d"} err="failed to get container status \"e2171f1a65477c2a1a866f2f41b85dfe2710ef7caa00a5c98632520985ff599d\": rpc error: code = NotFound desc = could not find container \"e2171f1a65477c2a1a866f2f41b85dfe2710ef7caa00a5c98632520985ff599d\": container with ID starting with e2171f1a65477c2a1a866f2f41b85dfe2710ef7caa00a5c98632520985ff599d not found: ID does not exist" Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.234103 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/063acef7-e054-4cf9-80ed-52c7a384754d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "063acef7-e054-4cf9-80ed-52c7a384754d" (UID: "063acef7-e054-4cf9-80ed-52c7a384754d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.237559 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/063acef7-e054-4cf9-80ed-52c7a384754d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "063acef7-e054-4cf9-80ed-52c7a384754d" (UID: "063acef7-e054-4cf9-80ed-52c7a384754d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.238190 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/063acef7-e054-4cf9-80ed-52c7a384754d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "063acef7-e054-4cf9-80ed-52c7a384754d" (UID: "063acef7-e054-4cf9-80ed-52c7a384754d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.240326 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.266056 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/063acef7-e054-4cf9-80ed-52c7a384754d-config" (OuterVolumeSpecName: "config") pod "063acef7-e054-4cf9-80ed-52c7a384754d" (UID: "063acef7-e054-4cf9-80ed-52c7a384754d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.266437 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/063acef7-e054-4cf9-80ed-52c7a384754d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.266485 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/063acef7-e054-4cf9-80ed-52c7a384754d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.266545 4725 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/063acef7-e054-4cf9-80ed-52c7a384754d-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.266559 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vw2jn\" (UniqueName: \"kubernetes.io/projected/063acef7-e054-4cf9-80ed-52c7a384754d-kube-api-access-vw2jn\") on node \"crc\" DevicePath \"\"" Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.266665 4725 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/063acef7-e054-4cf9-80ed-52c7a384754d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.266678 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/063acef7-e054-4cf9-80ed-52c7a384754d-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.357417 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.357692 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="741c0ed3-ec3b-4c57-92d1-eb42974f5383" containerName="nova-api-log" containerID="cri-o://95ab5842a0f82490d77380f5d1a33d14f89d5eed63745acc96766e0869097331" gracePeriod=30 Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.357797 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="741c0ed3-ec3b-4c57-92d1-eb42974f5383" containerName="nova-api-api" containerID="cri-o://8da76079f9d2dd0a6e78befd69caec483d3200bbcd6608f2530ec85695cb28a5" gracePeriod=30 Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.362837 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="741c0ed3-ec3b-4c57-92d1-eb42974f5383" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.189:8774/\": EOF" Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.362943 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="741c0ed3-ec3b-4c57-92d1-eb42974f5383" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.189:8774/\": EOF" Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.373958 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.374269 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ff4638a2-73f6-4c09-bf81-92ef54269af1" containerName="nova-metadata-log" 
containerID="cri-o://71138109d0e56ae9ebd5dbfab710034417c886b975fbbda6a67b05480406d3f9" gracePeriod=30 Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.374312 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ff4638a2-73f6-4c09-bf81-92ef54269af1" containerName="nova-metadata-metadata" containerID="cri-o://78e077d4b71ec0c6e800d9a1fdd6befb619941e29c153af92a0abdcd6ecac02c" gracePeriod=30 Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.669907 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-9tc2g"] Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.681356 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-9tc2g"] Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.754362 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.928790 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.937570 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="063acef7-e054-4cf9-80ed-52c7a384754d" path="/var/lib/kubelet/pods/063acef7-e054-4cf9-80ed-52c7a384754d/volumes" Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.979736 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff4638a2-73f6-4c09-bf81-92ef54269af1-nova-metadata-tls-certs\") pod \"ff4638a2-73f6-4c09-bf81-92ef54269af1\" (UID: \"ff4638a2-73f6-4c09-bf81-92ef54269af1\") " Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.979862 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff4638a2-73f6-4c09-bf81-92ef54269af1-combined-ca-bundle\") pod \"ff4638a2-73f6-4c09-bf81-92ef54269af1\" (UID: \"ff4638a2-73f6-4c09-bf81-92ef54269af1\") " Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.979907 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff4638a2-73f6-4c09-bf81-92ef54269af1-config-data\") pod \"ff4638a2-73f6-4c09-bf81-92ef54269af1\" (UID: \"ff4638a2-73f6-4c09-bf81-92ef54269af1\") " Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.980021 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6n9q\" (UniqueName: \"kubernetes.io/projected/ff4638a2-73f6-4c09-bf81-92ef54269af1-kube-api-access-r6n9q\") pod \"ff4638a2-73f6-4c09-bf81-92ef54269af1\" (UID: \"ff4638a2-73f6-4c09-bf81-92ef54269af1\") " Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.980096 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff4638a2-73f6-4c09-bf81-92ef54269af1-logs\") pod \"ff4638a2-73f6-4c09-bf81-92ef54269af1\" (UID: \"ff4638a2-73f6-4c09-bf81-92ef54269af1\") " Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.981891 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff4638a2-73f6-4c09-bf81-92ef54269af1-logs" (OuterVolumeSpecName: "logs") pod "ff4638a2-73f6-4c09-bf81-92ef54269af1" (UID: "ff4638a2-73f6-4c09-bf81-92ef54269af1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:34:37 crc kubenswrapper[4725]: I1014 13:34:37.985746 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff4638a2-73f6-4c09-bf81-92ef54269af1-kube-api-access-r6n9q" (OuterVolumeSpecName: "kube-api-access-r6n9q") pod "ff4638a2-73f6-4c09-bf81-92ef54269af1" (UID: "ff4638a2-73f6-4c09-bf81-92ef54269af1"). InnerVolumeSpecName "kube-api-access-r6n9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.008245 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff4638a2-73f6-4c09-bf81-92ef54269af1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff4638a2-73f6-4c09-bf81-92ef54269af1" (UID: "ff4638a2-73f6-4c09-bf81-92ef54269af1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.012617 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff4638a2-73f6-4c09-bf81-92ef54269af1-config-data" (OuterVolumeSpecName: "config-data") pod "ff4638a2-73f6-4c09-bf81-92ef54269af1" (UID: "ff4638a2-73f6-4c09-bf81-92ef54269af1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.027866 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff4638a2-73f6-4c09-bf81-92ef54269af1-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "ff4638a2-73f6-4c09-bf81-92ef54269af1" (UID: "ff4638a2-73f6-4c09-bf81-92ef54269af1"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.082892 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff4638a2-73f6-4c09-bf81-92ef54269af1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.082923 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff4638a2-73f6-4c09-bf81-92ef54269af1-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.082931 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6n9q\" (UniqueName: \"kubernetes.io/projected/ff4638a2-73f6-4c09-bf81-92ef54269af1-kube-api-access-r6n9q\") on node \"crc\" DevicePath \"\"" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.082941 4725 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff4638a2-73f6-4c09-bf81-92ef54269af1-logs\") on node \"crc\" DevicePath \"\"" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.082949 4725 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff4638a2-73f6-4c09-bf81-92ef54269af1-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.177964 4725 generic.go:334] "Generic (PLEG): container finished" podID="ff4638a2-73f6-4c09-bf81-92ef54269af1" containerID="78e077d4b71ec0c6e800d9a1fdd6befb619941e29c153af92a0abdcd6ecac02c" exitCode=0 Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.178016 4725 generic.go:334] "Generic (PLEG): container finished" podID="ff4638a2-73f6-4c09-bf81-92ef54269af1" containerID="71138109d0e56ae9ebd5dbfab710034417c886b975fbbda6a67b05480406d3f9" exitCode=143 Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.178053 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ff4638a2-73f6-4c09-bf81-92ef54269af1","Type":"ContainerDied","Data":"78e077d4b71ec0c6e800d9a1fdd6befb619941e29c153af92a0abdcd6ecac02c"} Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.178101 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ff4638a2-73f6-4c09-bf81-92ef54269af1","Type":"ContainerDied","Data":"71138109d0e56ae9ebd5dbfab710034417c886b975fbbda6a67b05480406d3f9"} Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.178113 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ff4638a2-73f6-4c09-bf81-92ef54269af1","Type":"ContainerDied","Data":"33709e57e89e77a3d8e8d9d44b2ce15204538dc0be38ce8e071033f3037da5fc"} Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.178130 4725 scope.go:117] "RemoveContainer" containerID="78e077d4b71ec0c6e800d9a1fdd6befb619941e29c153af92a0abdcd6ecac02c" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.178125 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.180672 4725 generic.go:334] "Generic (PLEG): container finished" podID="741c0ed3-ec3b-4c57-92d1-eb42974f5383" containerID="95ab5842a0f82490d77380f5d1a33d14f89d5eed63745acc96766e0869097331" exitCode=143 Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.181861 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"741c0ed3-ec3b-4c57-92d1-eb42974f5383","Type":"ContainerDied","Data":"95ab5842a0f82490d77380f5d1a33d14f89d5eed63745acc96766e0869097331"} Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.222173 4725 scope.go:117] "RemoveContainer" containerID="71138109d0e56ae9ebd5dbfab710034417c886b975fbbda6a67b05480406d3f9" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.222311 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.231734 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.241537 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 14 13:34:38 crc kubenswrapper[4725]: E1014 13:34:38.242299 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="063acef7-e054-4cf9-80ed-52c7a384754d" containerName="init" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.242324 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="063acef7-e054-4cf9-80ed-52c7a384754d" containerName="init" Oct 14 13:34:38 crc kubenswrapper[4725]: E1014 13:34:38.242355 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="063acef7-e054-4cf9-80ed-52c7a384754d" containerName="dnsmasq-dns" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.242363 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="063acef7-e054-4cf9-80ed-52c7a384754d" containerName="dnsmasq-dns" Oct 14 13:34:38 crc kubenswrapper[4725]: E1014 13:34:38.242383 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83d1426f-ea53-45ba-a397-e03f35b28711" containerName="nova-manage" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.242389 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="83d1426f-ea53-45ba-a397-e03f35b28711" containerName="nova-manage" Oct 14 13:34:38 crc kubenswrapper[4725]: E1014 13:34:38.242404 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff4638a2-73f6-4c09-bf81-92ef54269af1" containerName="nova-metadata-log" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.242410 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff4638a2-73f6-4c09-bf81-92ef54269af1" containerName="nova-metadata-log" Oct 14 13:34:38 crc kubenswrapper[4725]: E1014 13:34:38.242430 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff4638a2-73f6-4c09-bf81-92ef54269af1" containerName="nova-metadata-metadata" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.242437 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff4638a2-73f6-4c09-bf81-92ef54269af1" containerName="nova-metadata-metadata" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.242832 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="83d1426f-ea53-45ba-a397-e03f35b28711" containerName="nova-manage" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.242858 4725 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="063acef7-e054-4cf9-80ed-52c7a384754d" containerName="dnsmasq-dns" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.242873 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff4638a2-73f6-4c09-bf81-92ef54269af1" containerName="nova-metadata-log" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.242892 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff4638a2-73f6-4c09-bf81-92ef54269af1" containerName="nova-metadata-metadata" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.244699 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.254915 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.255246 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.278411 4725 scope.go:117] "RemoveContainer" containerID="78e077d4b71ec0c6e800d9a1fdd6befb619941e29c153af92a0abdcd6ecac02c" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.279847 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 13:34:38 crc kubenswrapper[4725]: E1014 13:34:38.280719 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78e077d4b71ec0c6e800d9a1fdd6befb619941e29c153af92a0abdcd6ecac02c\": container with ID starting with 78e077d4b71ec0c6e800d9a1fdd6befb619941e29c153af92a0abdcd6ecac02c not found: ID does not exist" containerID="78e077d4b71ec0c6e800d9a1fdd6befb619941e29c153af92a0abdcd6ecac02c" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.280785 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78e077d4b71ec0c6e800d9a1fdd6befb619941e29c153af92a0abdcd6ecac02c"} err="failed to get container status \"78e077d4b71ec0c6e800d9a1fdd6befb619941e29c153af92a0abdcd6ecac02c\": rpc error: code = NotFound desc = could not find container \"78e077d4b71ec0c6e800d9a1fdd6befb619941e29c153af92a0abdcd6ecac02c\": container with ID starting with 78e077d4b71ec0c6e800d9a1fdd6befb619941e29c153af92a0abdcd6ecac02c not found: ID does not exist" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.280812 4725 scope.go:117] "RemoveContainer" containerID="71138109d0e56ae9ebd5dbfab710034417c886b975fbbda6a67b05480406d3f9" Oct 14 13:34:38 crc kubenswrapper[4725]: E1014 13:34:38.283138 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71138109d0e56ae9ebd5dbfab710034417c886b975fbbda6a67b05480406d3f9\": container with ID starting with 71138109d0e56ae9ebd5dbfab710034417c886b975fbbda6a67b05480406d3f9 not found: ID does not exist" containerID="71138109d0e56ae9ebd5dbfab710034417c886b975fbbda6a67b05480406d3f9" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.283241 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71138109d0e56ae9ebd5dbfab710034417c886b975fbbda6a67b05480406d3f9"} err="failed to get container status \"71138109d0e56ae9ebd5dbfab710034417c886b975fbbda6a67b05480406d3f9\": rpc error: code = NotFound desc = could not find container \"71138109d0e56ae9ebd5dbfab710034417c886b975fbbda6a67b05480406d3f9\": container with ID starting with 
71138109d0e56ae9ebd5dbfab710034417c886b975fbbda6a67b05480406d3f9 not found: ID does not exist" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.283269 4725 scope.go:117] "RemoveContainer" containerID="78e077d4b71ec0c6e800d9a1fdd6befb619941e29c153af92a0abdcd6ecac02c" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.284623 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78e077d4b71ec0c6e800d9a1fdd6befb619941e29c153af92a0abdcd6ecac02c"} err="failed to get container status \"78e077d4b71ec0c6e800d9a1fdd6befb619941e29c153af92a0abdcd6ecac02c\": rpc error: code = NotFound desc = could not find container \"78e077d4b71ec0c6e800d9a1fdd6befb619941e29c153af92a0abdcd6ecac02c\": container with ID starting with 78e077d4b71ec0c6e800d9a1fdd6befb619941e29c153af92a0abdcd6ecac02c not found: ID does not exist" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.284680 4725 scope.go:117] "RemoveContainer" containerID="71138109d0e56ae9ebd5dbfab710034417c886b975fbbda6a67b05480406d3f9" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.285606 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71138109d0e56ae9ebd5dbfab710034417c886b975fbbda6a67b05480406d3f9"} err="failed to get container status \"71138109d0e56ae9ebd5dbfab710034417c886b975fbbda6a67b05480406d3f9\": rpc error: code = NotFound desc = could not find container \"71138109d0e56ae9ebd5dbfab710034417c886b975fbbda6a67b05480406d3f9\": container with ID starting with 71138109d0e56ae9ebd5dbfab710034417c886b975fbbda6a67b05480406d3f9 not found: ID does not exist" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.288372 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfxcr\" (UniqueName: \"kubernetes.io/projected/a2f2aea4-7075-45b5-94c7-73d1b181beb8-kube-api-access-kfxcr\") pod \"nova-metadata-0\" (UID: \"a2f2aea4-7075-45b5-94c7-73d1b181beb8\") " pod="openstack/nova-metadata-0" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.288435 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2f2aea4-7075-45b5-94c7-73d1b181beb8-config-data\") pod \"nova-metadata-0\" (UID: \"a2f2aea4-7075-45b5-94c7-73d1b181beb8\") " pod="openstack/nova-metadata-0" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.288487 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2f2aea4-7075-45b5-94c7-73d1b181beb8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a2f2aea4-7075-45b5-94c7-73d1b181beb8\") " pod="openstack/nova-metadata-0" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.288569 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2f2aea4-7075-45b5-94c7-73d1b181beb8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a2f2aea4-7075-45b5-94c7-73d1b181beb8\") " pod="openstack/nova-metadata-0" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.288671 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2f2aea4-7075-45b5-94c7-73d1b181beb8-logs\") pod \"nova-metadata-0\" (UID: \"a2f2aea4-7075-45b5-94c7-73d1b181beb8\") " pod="openstack/nova-metadata-0" 
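[Annotation, not part of the captured journal] The paired E/I records above — "ContainerStatus from runtime service failed ... NotFound" followed by "DeleteContainer returned error" — are a benign race: the kubelet retries RemoveContainer for the old nova-metadata-0 containers after CRI-O has already pruned them, so the status lookup finds nothing and cleanup is effectively complete. Below is a minimal Go sketch of how one might count these lookups in an exported journal; the input path kubelet.log and the matched message text are assumptions taken from this capture, not a supported tool.

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Hypothetical journal export of the stream above.
	f, err := os.Open("kubelet.log")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Matches the containerID field on the log.go:32 NotFound errors.
	notFound := regexp.MustCompile(`ContainerStatus from runtime service failed.*containerID="([0-9a-f]{64})"`)

	counts := map[string]int{}
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can exceed the default 64 KiB
	for sc.Scan() {
		if m := notFound.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	for id, n := range counts {
		fmt.Printf("%s: %d status lookups after removal (already gone)\n", id[:12], n)
	}
}

Each distinct ID appearing only once or twice with no follow-up errors matches the pattern in this capture: the container is simply gone, and no corrective action is needed.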
Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.390236 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2f2aea4-7075-45b5-94c7-73d1b181beb8-logs\") pod \"nova-metadata-0\" (UID: \"a2f2aea4-7075-45b5-94c7-73d1b181beb8\") " pod="openstack/nova-metadata-0" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.390327 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfxcr\" (UniqueName: \"kubernetes.io/projected/a2f2aea4-7075-45b5-94c7-73d1b181beb8-kube-api-access-kfxcr\") pod \"nova-metadata-0\" (UID: \"a2f2aea4-7075-45b5-94c7-73d1b181beb8\") " pod="openstack/nova-metadata-0" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.390370 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2f2aea4-7075-45b5-94c7-73d1b181beb8-config-data\") pod \"nova-metadata-0\" (UID: \"a2f2aea4-7075-45b5-94c7-73d1b181beb8\") " pod="openstack/nova-metadata-0" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.390417 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2f2aea4-7075-45b5-94c7-73d1b181beb8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a2f2aea4-7075-45b5-94c7-73d1b181beb8\") " pod="openstack/nova-metadata-0" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.390512 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2f2aea4-7075-45b5-94c7-73d1b181beb8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a2f2aea4-7075-45b5-94c7-73d1b181beb8\") " pod="openstack/nova-metadata-0" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.391861 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2f2aea4-7075-45b5-94c7-73d1b181beb8-logs\") pod \"nova-metadata-0\" (UID: \"a2f2aea4-7075-45b5-94c7-73d1b181beb8\") " pod="openstack/nova-metadata-0" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.396985 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2f2aea4-7075-45b5-94c7-73d1b181beb8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a2f2aea4-7075-45b5-94c7-73d1b181beb8\") " pod="openstack/nova-metadata-0" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.398849 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2f2aea4-7075-45b5-94c7-73d1b181beb8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a2f2aea4-7075-45b5-94c7-73d1b181beb8\") " pod="openstack/nova-metadata-0" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.399593 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2f2aea4-7075-45b5-94c7-73d1b181beb8-config-data\") pod \"nova-metadata-0\" (UID: \"a2f2aea4-7075-45b5-94c7-73d1b181beb8\") " pod="openstack/nova-metadata-0" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.407239 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfxcr\" (UniqueName: \"kubernetes.io/projected/a2f2aea4-7075-45b5-94c7-73d1b181beb8-kube-api-access-kfxcr\") pod \"nova-metadata-0\" (UID: 
\"a2f2aea4-7075-45b5-94c7-73d1b181beb8\") " pod="openstack/nova-metadata-0" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.575763 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-dgd9g" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.591745 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.695721 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfa8d069-e628-42b1-a2c3-f099375ffff3-config-data\") pod \"bfa8d069-e628-42b1-a2c3-f099375ffff3\" (UID: \"bfa8d069-e628-42b1-a2c3-f099375ffff3\") " Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.695842 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qxmp\" (UniqueName: \"kubernetes.io/projected/bfa8d069-e628-42b1-a2c3-f099375ffff3-kube-api-access-9qxmp\") pod \"bfa8d069-e628-42b1-a2c3-f099375ffff3\" (UID: \"bfa8d069-e628-42b1-a2c3-f099375ffff3\") " Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.695903 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfa8d069-e628-42b1-a2c3-f099375ffff3-scripts\") pod \"bfa8d069-e628-42b1-a2c3-f099375ffff3\" (UID: \"bfa8d069-e628-42b1-a2c3-f099375ffff3\") " Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.695974 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa8d069-e628-42b1-a2c3-f099375ffff3-combined-ca-bundle\") pod \"bfa8d069-e628-42b1-a2c3-f099375ffff3\" (UID: \"bfa8d069-e628-42b1-a2c3-f099375ffff3\") " Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.700594 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfa8d069-e628-42b1-a2c3-f099375ffff3-scripts" (OuterVolumeSpecName: "scripts") pod "bfa8d069-e628-42b1-a2c3-f099375ffff3" (UID: "bfa8d069-e628-42b1-a2c3-f099375ffff3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.701126 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfa8d069-e628-42b1-a2c3-f099375ffff3-kube-api-access-9qxmp" (OuterVolumeSpecName: "kube-api-access-9qxmp") pod "bfa8d069-e628-42b1-a2c3-f099375ffff3" (UID: "bfa8d069-e628-42b1-a2c3-f099375ffff3"). InnerVolumeSpecName "kube-api-access-9qxmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.724617 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfa8d069-e628-42b1-a2c3-f099375ffff3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bfa8d069-e628-42b1-a2c3-f099375ffff3" (UID: "bfa8d069-e628-42b1-a2c3-f099375ffff3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.729067 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfa8d069-e628-42b1-a2c3-f099375ffff3-config-data" (OuterVolumeSpecName: "config-data") pod "bfa8d069-e628-42b1-a2c3-f099375ffff3" (UID: "bfa8d069-e628-42b1-a2c3-f099375ffff3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.798103 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfa8d069-e628-42b1-a2c3-f099375ffff3-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.798139 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa8d069-e628-42b1-a2c3-f099375ffff3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.798150 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfa8d069-e628-42b1-a2c3-f099375ffff3-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 13:34:38 crc kubenswrapper[4725]: I1014 13:34:38.798159 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qxmp\" (UniqueName: \"kubernetes.io/projected/bfa8d069-e628-42b1-a2c3-f099375ffff3-kube-api-access-9qxmp\") on node \"crc\" DevicePath \"\"" Oct 14 13:34:39 crc kubenswrapper[4725]: I1014 13:34:39.098636 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 13:34:39 crc kubenswrapper[4725]: I1014 13:34:39.194819 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a2f2aea4-7075-45b5-94c7-73d1b181beb8","Type":"ContainerStarted","Data":"63fbff28b6fc7653665ca7546fd9a2e6d492fb461224604cd8a2746faf98fbab"} Oct 14 13:34:39 crc kubenswrapper[4725]: I1014 13:34:39.197008 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="ee779039-892b-461d-aea0-1139654037bf" containerName="nova-scheduler-scheduler" containerID="cri-o://b294d6794d812e5edf3ef6a9e958a0e819a4d9cef0713b11e446bf4a94f417e6" gracePeriod=30 Oct 14 13:34:39 crc kubenswrapper[4725]: I1014 13:34:39.197415 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-dgd9g" Oct 14 13:34:39 crc kubenswrapper[4725]: I1014 13:34:39.197852 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-dgd9g" event={"ID":"bfa8d069-e628-42b1-a2c3-f099375ffff3","Type":"ContainerDied","Data":"f94820667f06d105b43f2623f0bad7c9f2d5ad84f47cce310221cc22e6319b56"} Oct 14 13:34:39 crc kubenswrapper[4725]: I1014 13:34:39.197888 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f94820667f06d105b43f2623f0bad7c9f2d5ad84f47cce310221cc22e6319b56" Oct 14 13:34:39 crc kubenswrapper[4725]: I1014 13:34:39.272778 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 14 13:34:39 crc kubenswrapper[4725]: E1014 13:34:39.273275 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfa8d069-e628-42b1-a2c3-f099375ffff3" containerName="nova-cell1-conductor-db-sync" Oct 14 13:34:39 crc kubenswrapper[4725]: I1014 13:34:39.273299 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfa8d069-e628-42b1-a2c3-f099375ffff3" containerName="nova-cell1-conductor-db-sync" Oct 14 13:34:39 crc kubenswrapper[4725]: I1014 13:34:39.273580 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfa8d069-e628-42b1-a2c3-f099375ffff3" containerName="nova-cell1-conductor-db-sync" Oct 14 13:34:39 crc kubenswrapper[4725]: I1014 13:34:39.274476 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 14 13:34:39 crc kubenswrapper[4725]: I1014 13:34:39.277063 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Oct 14 13:34:39 crc kubenswrapper[4725]: I1014 13:34:39.293815 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 14 13:34:39 crc kubenswrapper[4725]: I1014 13:34:39.307882 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/307135f2-8895-4193-aab6-077f7bd59bec-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"307135f2-8895-4193-aab6-077f7bd59bec\") " pod="openstack/nova-cell1-conductor-0" Oct 14 13:34:39 crc kubenswrapper[4725]: I1014 13:34:39.307992 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/307135f2-8895-4193-aab6-077f7bd59bec-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"307135f2-8895-4193-aab6-077f7bd59bec\") " pod="openstack/nova-cell1-conductor-0" Oct 14 13:34:39 crc kubenswrapper[4725]: I1014 13:34:39.308124 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbgvt\" (UniqueName: \"kubernetes.io/projected/307135f2-8895-4193-aab6-077f7bd59bec-kube-api-access-kbgvt\") pod \"nova-cell1-conductor-0\" (UID: \"307135f2-8895-4193-aab6-077f7bd59bec\") " pod="openstack/nova-cell1-conductor-0" Oct 14 13:34:39 crc kubenswrapper[4725]: I1014 13:34:39.410137 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbgvt\" (UniqueName: \"kubernetes.io/projected/307135f2-8895-4193-aab6-077f7bd59bec-kube-api-access-kbgvt\") pod \"nova-cell1-conductor-0\" (UID: \"307135f2-8895-4193-aab6-077f7bd59bec\") " pod="openstack/nova-cell1-conductor-0" Oct 14 13:34:39 crc kubenswrapper[4725]: I1014 
13:34:39.410237 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/307135f2-8895-4193-aab6-077f7bd59bec-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"307135f2-8895-4193-aab6-077f7bd59bec\") " pod="openstack/nova-cell1-conductor-0" Oct 14 13:34:39 crc kubenswrapper[4725]: I1014 13:34:39.410365 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/307135f2-8895-4193-aab6-077f7bd59bec-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"307135f2-8895-4193-aab6-077f7bd59bec\") " pod="openstack/nova-cell1-conductor-0" Oct 14 13:34:39 crc kubenswrapper[4725]: I1014 13:34:39.420398 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/307135f2-8895-4193-aab6-077f7bd59bec-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"307135f2-8895-4193-aab6-077f7bd59bec\") " pod="openstack/nova-cell1-conductor-0" Oct 14 13:34:39 crc kubenswrapper[4725]: I1014 13:34:39.420952 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/307135f2-8895-4193-aab6-077f7bd59bec-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"307135f2-8895-4193-aab6-077f7bd59bec\") " pod="openstack/nova-cell1-conductor-0" Oct 14 13:34:39 crc kubenswrapper[4725]: I1014 13:34:39.430423 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbgvt\" (UniqueName: \"kubernetes.io/projected/307135f2-8895-4193-aab6-077f7bd59bec-kube-api-access-kbgvt\") pod \"nova-cell1-conductor-0\" (UID: \"307135f2-8895-4193-aab6-077f7bd59bec\") " pod="openstack/nova-cell1-conductor-0" Oct 14 13:34:39 crc kubenswrapper[4725]: I1014 13:34:39.633672 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 14 13:34:39 crc kubenswrapper[4725]: I1014 13:34:39.938734 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff4638a2-73f6-4c09-bf81-92ef54269af1" path="/var/lib/kubelet/pods/ff4638a2-73f6-4c09-bf81-92ef54269af1/volumes" Oct 14 13:34:40 crc kubenswrapper[4725]: I1014 13:34:40.125524 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 14 13:34:40 crc kubenswrapper[4725]: I1014 13:34:40.205791 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"307135f2-8895-4193-aab6-077f7bd59bec","Type":"ContainerStarted","Data":"efec09a7e47be97f5e16ca9f4ce7d1e245a66aa6657adbf2c050051653837096"} Oct 14 13:34:40 crc kubenswrapper[4725]: I1014 13:34:40.207558 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a2f2aea4-7075-45b5-94c7-73d1b181beb8","Type":"ContainerStarted","Data":"4c954ba509c4060eb493db7ab8acda5bcdc4598fd7350f9069bde8693aeb4956"} Oct 14 13:34:40 crc kubenswrapper[4725]: I1014 13:34:40.207600 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a2f2aea4-7075-45b5-94c7-73d1b181beb8","Type":"ContainerStarted","Data":"4a56d9d6286fccae08cae0cd622f550cd44a5275731826ef484cbc061f89a5ca"} Oct 14 13:34:40 crc kubenswrapper[4725]: I1014 13:34:40.232878 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.232858301 podStartE2EDuration="2.232858301s" podCreationTimestamp="2025-10-14 13:34:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:34:40.223995276 +0000 UTC m=+1197.072430095" watchObservedRunningTime="2025-10-14 13:34:40.232858301 +0000 UTC m=+1197.081293130" Oct 14 13:34:41 crc kubenswrapper[4725]: I1014 13:34:41.224789 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"307135f2-8895-4193-aab6-077f7bd59bec","Type":"ContainerStarted","Data":"1f0d3cc0771e8847e2f299a8e2c8f3e1a678026cbc7553bd8b952b2f80493c69"} Oct 14 13:34:41 crc kubenswrapper[4725]: I1014 13:34:41.242781 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.242756161 podStartE2EDuration="2.242756161s" podCreationTimestamp="2025-10-14 13:34:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:34:41.241714772 +0000 UTC m=+1198.090149621" watchObservedRunningTime="2025-10-14 13:34:41.242756161 +0000 UTC m=+1198.091191000" Oct 14 13:34:41 crc kubenswrapper[4725]: E1014 13:34:41.295978 4725 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b294d6794d812e5edf3ef6a9e958a0e819a4d9cef0713b11e446bf4a94f417e6" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 14 13:34:41 crc kubenswrapper[4725]: E1014 13:34:41.298989 4725 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b294d6794d812e5edf3ef6a9e958a0e819a4d9cef0713b11e446bf4a94f417e6" 
cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 14 13:34:41 crc kubenswrapper[4725]: E1014 13:34:41.302325 4725 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b294d6794d812e5edf3ef6a9e958a0e819a4d9cef0713b11e446bf4a94f417e6" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 14 13:34:41 crc kubenswrapper[4725]: E1014 13:34:41.302404 4725 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="ee779039-892b-461d-aea0-1139654037bf" containerName="nova-scheduler-scheduler" Oct 14 13:34:41 crc kubenswrapper[4725]: I1014 13:34:41.997118 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 13:34:42 crc kubenswrapper[4725]: I1014 13:34:42.051614 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5784cf869f-9tc2g" podUID="063acef7-e054-4cf9-80ed-52c7a384754d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.163:5353: i/o timeout" Oct 14 13:34:42 crc kubenswrapper[4725]: I1014 13:34:42.067239 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee779039-892b-461d-aea0-1139654037bf-combined-ca-bundle\") pod \"ee779039-892b-461d-aea0-1139654037bf\" (UID: \"ee779039-892b-461d-aea0-1139654037bf\") " Oct 14 13:34:42 crc kubenswrapper[4725]: I1014 13:34:42.067362 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee779039-892b-461d-aea0-1139654037bf-config-data\") pod \"ee779039-892b-461d-aea0-1139654037bf\" (UID: \"ee779039-892b-461d-aea0-1139654037bf\") " Oct 14 13:34:42 crc kubenswrapper[4725]: I1014 13:34:42.067567 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2nrb\" (UniqueName: \"kubernetes.io/projected/ee779039-892b-461d-aea0-1139654037bf-kube-api-access-x2nrb\") pod \"ee779039-892b-461d-aea0-1139654037bf\" (UID: \"ee779039-892b-461d-aea0-1139654037bf\") " Oct 14 13:34:42 crc kubenswrapper[4725]: I1014 13:34:42.073273 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee779039-892b-461d-aea0-1139654037bf-kube-api-access-x2nrb" (OuterVolumeSpecName: "kube-api-access-x2nrb") pod "ee779039-892b-461d-aea0-1139654037bf" (UID: "ee779039-892b-461d-aea0-1139654037bf"). InnerVolumeSpecName "kube-api-access-x2nrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:34:42 crc kubenswrapper[4725]: I1014 13:34:42.098729 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee779039-892b-461d-aea0-1139654037bf-config-data" (OuterVolumeSpecName: "config-data") pod "ee779039-892b-461d-aea0-1139654037bf" (UID: "ee779039-892b-461d-aea0-1139654037bf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:34:42 crc kubenswrapper[4725]: I1014 13:34:42.108891 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee779039-892b-461d-aea0-1139654037bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee779039-892b-461d-aea0-1139654037bf" (UID: "ee779039-892b-461d-aea0-1139654037bf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:34:42 crc kubenswrapper[4725]: I1014 13:34:42.169958 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2nrb\" (UniqueName: \"kubernetes.io/projected/ee779039-892b-461d-aea0-1139654037bf-kube-api-access-x2nrb\") on node \"crc\" DevicePath \"\"" Oct 14 13:34:42 crc kubenswrapper[4725]: I1014 13:34:42.169991 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee779039-892b-461d-aea0-1139654037bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:34:42 crc kubenswrapper[4725]: I1014 13:34:42.170001 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee779039-892b-461d-aea0-1139654037bf-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 13:34:42 crc kubenswrapper[4725]: I1014 13:34:42.236048 4725 generic.go:334] "Generic (PLEG): container finished" podID="ee779039-892b-461d-aea0-1139654037bf" containerID="b294d6794d812e5edf3ef6a9e958a0e819a4d9cef0713b11e446bf4a94f417e6" exitCode=0 Oct 14 13:34:42 crc kubenswrapper[4725]: I1014 13:34:42.236086 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ee779039-892b-461d-aea0-1139654037bf","Type":"ContainerDied","Data":"b294d6794d812e5edf3ef6a9e958a0e819a4d9cef0713b11e446bf4a94f417e6"} Oct 14 13:34:42 crc kubenswrapper[4725]: I1014 13:34:42.236142 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ee779039-892b-461d-aea0-1139654037bf","Type":"ContainerDied","Data":"212a4b9c3abb6ca202ab959b806b51845446bbc8fe81991901884dc32161cd59"} Oct 14 13:34:42 crc kubenswrapper[4725]: I1014 13:34:42.236164 4725 scope.go:117] "RemoveContainer" containerID="b294d6794d812e5edf3ef6a9e958a0e819a4d9cef0713b11e446bf4a94f417e6" Oct 14 13:34:42 crc kubenswrapper[4725]: I1014 13:34:42.236298 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Oct 14 13:34:42 crc kubenswrapper[4725]: I1014 13:34:42.236158 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 13:34:42 crc kubenswrapper[4725]: I1014 13:34:42.265538 4725 scope.go:117] "RemoveContainer" containerID="b294d6794d812e5edf3ef6a9e958a0e819a4d9cef0713b11e446bf4a94f417e6" Oct 14 13:34:42 crc kubenswrapper[4725]: E1014 13:34:42.277131 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b294d6794d812e5edf3ef6a9e958a0e819a4d9cef0713b11e446bf4a94f417e6\": container with ID starting with b294d6794d812e5edf3ef6a9e958a0e819a4d9cef0713b11e446bf4a94f417e6 not found: ID does not exist" containerID="b294d6794d812e5edf3ef6a9e958a0e819a4d9cef0713b11e446bf4a94f417e6" Oct 14 13:34:42 crc kubenswrapper[4725]: I1014 13:34:42.277188 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b294d6794d812e5edf3ef6a9e958a0e819a4d9cef0713b11e446bf4a94f417e6"} err="failed to get container status \"b294d6794d812e5edf3ef6a9e958a0e819a4d9cef0713b11e446bf4a94f417e6\": rpc error: code = NotFound desc = could not find container \"b294d6794d812e5edf3ef6a9e958a0e819a4d9cef0713b11e446bf4a94f417e6\": container with ID starting with b294d6794d812e5edf3ef6a9e958a0e819a4d9cef0713b11e446bf4a94f417e6 not found: ID does not exist" Oct 14 13:34:42 crc kubenswrapper[4725]: I1014 13:34:42.277239 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 13:34:42 crc kubenswrapper[4725]: I1014 13:34:42.296567 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 13:34:42 crc kubenswrapper[4725]: I1014 13:34:42.326181 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 13:34:42 crc kubenswrapper[4725]: E1014 13:34:42.326746 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee779039-892b-461d-aea0-1139654037bf" containerName="nova-scheduler-scheduler" Oct 14 13:34:42 crc kubenswrapper[4725]: I1014 13:34:42.326773 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee779039-892b-461d-aea0-1139654037bf" containerName="nova-scheduler-scheduler" Oct 14 13:34:42 crc kubenswrapper[4725]: I1014 13:34:42.327039 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee779039-892b-461d-aea0-1139654037bf" containerName="nova-scheduler-scheduler" Oct 14 13:34:42 crc kubenswrapper[4725]: I1014 13:34:42.327839 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 13:34:42 crc kubenswrapper[4725]: I1014 13:34:42.330136 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 14 13:34:42 crc kubenswrapper[4725]: I1014 13:34:42.333080 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 13:34:42 crc kubenswrapper[4725]: I1014 13:34:42.373371 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9k7m\" (UniqueName: \"kubernetes.io/projected/7243f030-3d65-4fc3-bf95-7a3df406ad4d-kube-api-access-r9k7m\") pod \"nova-scheduler-0\" (UID: \"7243f030-3d65-4fc3-bf95-7a3df406ad4d\") " pod="openstack/nova-scheduler-0" Oct 14 13:34:42 crc kubenswrapper[4725]: I1014 13:34:42.373539 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7243f030-3d65-4fc3-bf95-7a3df406ad4d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7243f030-3d65-4fc3-bf95-7a3df406ad4d\") " pod="openstack/nova-scheduler-0" Oct 14 13:34:42 crc kubenswrapper[4725]: I1014 13:34:42.373648 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7243f030-3d65-4fc3-bf95-7a3df406ad4d-config-data\") pod \"nova-scheduler-0\" (UID: \"7243f030-3d65-4fc3-bf95-7a3df406ad4d\") " pod="openstack/nova-scheduler-0" Oct 14 13:34:42 crc kubenswrapper[4725]: I1014 13:34:42.475307 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9k7m\" (UniqueName: \"kubernetes.io/projected/7243f030-3d65-4fc3-bf95-7a3df406ad4d-kube-api-access-r9k7m\") pod \"nova-scheduler-0\" (UID: \"7243f030-3d65-4fc3-bf95-7a3df406ad4d\") " pod="openstack/nova-scheduler-0" Oct 14 13:34:42 crc kubenswrapper[4725]: I1014 13:34:42.475597 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7243f030-3d65-4fc3-bf95-7a3df406ad4d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7243f030-3d65-4fc3-bf95-7a3df406ad4d\") " pod="openstack/nova-scheduler-0" Oct 14 13:34:42 crc kubenswrapper[4725]: I1014 13:34:42.475659 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7243f030-3d65-4fc3-bf95-7a3df406ad4d-config-data\") pod \"nova-scheduler-0\" (UID: \"7243f030-3d65-4fc3-bf95-7a3df406ad4d\") " pod="openstack/nova-scheduler-0" Oct 14 13:34:42 crc kubenswrapper[4725]: I1014 13:34:42.479671 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7243f030-3d65-4fc3-bf95-7a3df406ad4d-config-data\") pod \"nova-scheduler-0\" (UID: \"7243f030-3d65-4fc3-bf95-7a3df406ad4d\") " pod="openstack/nova-scheduler-0" Oct 14 13:34:42 crc kubenswrapper[4725]: I1014 13:34:42.480143 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7243f030-3d65-4fc3-bf95-7a3df406ad4d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7243f030-3d65-4fc3-bf95-7a3df406ad4d\") " pod="openstack/nova-scheduler-0" Oct 14 13:34:42 crc kubenswrapper[4725]: I1014 13:34:42.502442 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9k7m\" (UniqueName: 
\"kubernetes.io/projected/7243f030-3d65-4fc3-bf95-7a3df406ad4d-kube-api-access-r9k7m\") pod \"nova-scheduler-0\" (UID: \"7243f030-3d65-4fc3-bf95-7a3df406ad4d\") " pod="openstack/nova-scheduler-0" Oct 14 13:34:42 crc kubenswrapper[4725]: I1014 13:34:42.644536 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 13:34:43 crc kubenswrapper[4725]: W1014 13:34:43.173045 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7243f030_3d65_4fc3_bf95_7a3df406ad4d.slice/crio-80c077fbeebc7301629c7173d18ca7e8ba1edba027c69da0e3a3792dcb4e1492 WatchSource:0}: Error finding container 80c077fbeebc7301629c7173d18ca7e8ba1edba027c69da0e3a3792dcb4e1492: Status 404 returned error can't find the container with id 80c077fbeebc7301629c7173d18ca7e8ba1edba027c69da0e3a3792dcb4e1492 Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.181391 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.235533 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.259601 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7243f030-3d65-4fc3-bf95-7a3df406ad4d","Type":"ContainerStarted","Data":"80c077fbeebc7301629c7173d18ca7e8ba1edba027c69da0e3a3792dcb4e1492"} Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.273910 4725 generic.go:334] "Generic (PLEG): container finished" podID="741c0ed3-ec3b-4c57-92d1-eb42974f5383" containerID="8da76079f9d2dd0a6e78befd69caec483d3200bbcd6608f2530ec85695cb28a5" exitCode=0 Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.274413 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.274691 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"741c0ed3-ec3b-4c57-92d1-eb42974f5383","Type":"ContainerDied","Data":"8da76079f9d2dd0a6e78befd69caec483d3200bbcd6608f2530ec85695cb28a5"} Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.274738 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"741c0ed3-ec3b-4c57-92d1-eb42974f5383","Type":"ContainerDied","Data":"b211fee2b8556483f5d7fb1f8e6e87ed5babc3dfc2cea21467c6c8221a46b90d"} Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.274757 4725 scope.go:117] "RemoveContainer" containerID="8da76079f9d2dd0a6e78befd69caec483d3200bbcd6608f2530ec85695cb28a5" Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.290607 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/741c0ed3-ec3b-4c57-92d1-eb42974f5383-config-data\") pod \"741c0ed3-ec3b-4c57-92d1-eb42974f5383\" (UID: \"741c0ed3-ec3b-4c57-92d1-eb42974f5383\") " Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.290726 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9rtz\" (UniqueName: \"kubernetes.io/projected/741c0ed3-ec3b-4c57-92d1-eb42974f5383-kube-api-access-v9rtz\") pod \"741c0ed3-ec3b-4c57-92d1-eb42974f5383\" (UID: \"741c0ed3-ec3b-4c57-92d1-eb42974f5383\") " Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.290857 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/741c0ed3-ec3b-4c57-92d1-eb42974f5383-combined-ca-bundle\") pod \"741c0ed3-ec3b-4c57-92d1-eb42974f5383\" (UID: \"741c0ed3-ec3b-4c57-92d1-eb42974f5383\") " Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.290908 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/741c0ed3-ec3b-4c57-92d1-eb42974f5383-logs\") pod \"741c0ed3-ec3b-4c57-92d1-eb42974f5383\" (UID: \"741c0ed3-ec3b-4c57-92d1-eb42974f5383\") " Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.292086 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/741c0ed3-ec3b-4c57-92d1-eb42974f5383-logs" (OuterVolumeSpecName: "logs") pod "741c0ed3-ec3b-4c57-92d1-eb42974f5383" (UID: "741c0ed3-ec3b-4c57-92d1-eb42974f5383"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.296762 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/741c0ed3-ec3b-4c57-92d1-eb42974f5383-kube-api-access-v9rtz" (OuterVolumeSpecName: "kube-api-access-v9rtz") pod "741c0ed3-ec3b-4c57-92d1-eb42974f5383" (UID: "741c0ed3-ec3b-4c57-92d1-eb42974f5383"). InnerVolumeSpecName "kube-api-access-v9rtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.307871 4725 scope.go:117] "RemoveContainer" containerID="95ab5842a0f82490d77380f5d1a33d14f89d5eed63745acc96766e0869097331" Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.327343 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/741c0ed3-ec3b-4c57-92d1-eb42974f5383-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "741c0ed3-ec3b-4c57-92d1-eb42974f5383" (UID: "741c0ed3-ec3b-4c57-92d1-eb42974f5383"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.344342 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/741c0ed3-ec3b-4c57-92d1-eb42974f5383-config-data" (OuterVolumeSpecName: "config-data") pod "741c0ed3-ec3b-4c57-92d1-eb42974f5383" (UID: "741c0ed3-ec3b-4c57-92d1-eb42974f5383"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.346600 4725 scope.go:117] "RemoveContainer" containerID="8da76079f9d2dd0a6e78befd69caec483d3200bbcd6608f2530ec85695cb28a5" Oct 14 13:34:43 crc kubenswrapper[4725]: E1014 13:34:43.347251 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8da76079f9d2dd0a6e78befd69caec483d3200bbcd6608f2530ec85695cb28a5\": container with ID starting with 8da76079f9d2dd0a6e78befd69caec483d3200bbcd6608f2530ec85695cb28a5 not found: ID does not exist" containerID="8da76079f9d2dd0a6e78befd69caec483d3200bbcd6608f2530ec85695cb28a5" Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.347294 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8da76079f9d2dd0a6e78befd69caec483d3200bbcd6608f2530ec85695cb28a5"} err="failed to get container status \"8da76079f9d2dd0a6e78befd69caec483d3200bbcd6608f2530ec85695cb28a5\": rpc error: code = NotFound desc = could not find container \"8da76079f9d2dd0a6e78befd69caec483d3200bbcd6608f2530ec85695cb28a5\": container with ID starting with 8da76079f9d2dd0a6e78befd69caec483d3200bbcd6608f2530ec85695cb28a5 not found: ID does not exist" Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.347322 4725 scope.go:117] "RemoveContainer" containerID="95ab5842a0f82490d77380f5d1a33d14f89d5eed63745acc96766e0869097331" Oct 14 13:34:43 crc kubenswrapper[4725]: E1014 13:34:43.347586 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95ab5842a0f82490d77380f5d1a33d14f89d5eed63745acc96766e0869097331\": container with ID starting with 95ab5842a0f82490d77380f5d1a33d14f89d5eed63745acc96766e0869097331 not found: ID does not exist" containerID="95ab5842a0f82490d77380f5d1a33d14f89d5eed63745acc96766e0869097331" Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.347611 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95ab5842a0f82490d77380f5d1a33d14f89d5eed63745acc96766e0869097331"} err="failed to get container status \"95ab5842a0f82490d77380f5d1a33d14f89d5eed63745acc96766e0869097331\": rpc error: code = NotFound desc = could not find container \"95ab5842a0f82490d77380f5d1a33d14f89d5eed63745acc96766e0869097331\": container with ID starting with 95ab5842a0f82490d77380f5d1a33d14f89d5eed63745acc96766e0869097331 
not found: ID does not exist" Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.393803 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/741c0ed3-ec3b-4c57-92d1-eb42974f5383-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.393837 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9rtz\" (UniqueName: \"kubernetes.io/projected/741c0ed3-ec3b-4c57-92d1-eb42974f5383-kube-api-access-v9rtz\") on node \"crc\" DevicePath \"\"" Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.393851 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/741c0ed3-ec3b-4c57-92d1-eb42974f5383-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.393862 4725 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/741c0ed3-ec3b-4c57-92d1-eb42974f5383-logs\") on node \"crc\" DevicePath \"\"" Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.594561 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.596733 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.632664 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.641435 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.686864 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 14 13:34:43 crc kubenswrapper[4725]: E1014 13:34:43.687407 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="741c0ed3-ec3b-4c57-92d1-eb42974f5383" containerName="nova-api-log" Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.687434 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="741c0ed3-ec3b-4c57-92d1-eb42974f5383" containerName="nova-api-log" Oct 14 13:34:43 crc kubenswrapper[4725]: E1014 13:34:43.687476 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="741c0ed3-ec3b-4c57-92d1-eb42974f5383" containerName="nova-api-api" Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.687487 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="741c0ed3-ec3b-4c57-92d1-eb42974f5383" containerName="nova-api-api" Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.687707 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="741c0ed3-ec3b-4c57-92d1-eb42974f5383" containerName="nova-api-api" Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.687750 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="741c0ed3-ec3b-4c57-92d1-eb42974f5383" containerName="nova-api-log" Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.689394 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.693897 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.716736 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.808322 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f03bc7c0-58f8-4a5c-bfcb-47af11705529-logs\") pod \"nova-api-0\" (UID: \"f03bc7c0-58f8-4a5c-bfcb-47af11705529\") " pod="openstack/nova-api-0" Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.808366 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f03bc7c0-58f8-4a5c-bfcb-47af11705529-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f03bc7c0-58f8-4a5c-bfcb-47af11705529\") " pod="openstack/nova-api-0" Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.808402 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdn6p\" (UniqueName: \"kubernetes.io/projected/f03bc7c0-58f8-4a5c-bfcb-47af11705529-kube-api-access-mdn6p\") pod \"nova-api-0\" (UID: \"f03bc7c0-58f8-4a5c-bfcb-47af11705529\") " pod="openstack/nova-api-0" Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.808424 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f03bc7c0-58f8-4a5c-bfcb-47af11705529-config-data\") pod \"nova-api-0\" (UID: \"f03bc7c0-58f8-4a5c-bfcb-47af11705529\") " pod="openstack/nova-api-0" Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.910167 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f03bc7c0-58f8-4a5c-bfcb-47af11705529-logs\") pod \"nova-api-0\" (UID: \"f03bc7c0-58f8-4a5c-bfcb-47af11705529\") " pod="openstack/nova-api-0" Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.910212 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f03bc7c0-58f8-4a5c-bfcb-47af11705529-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f03bc7c0-58f8-4a5c-bfcb-47af11705529\") " pod="openstack/nova-api-0" Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.910233 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdn6p\" (UniqueName: \"kubernetes.io/projected/f03bc7c0-58f8-4a5c-bfcb-47af11705529-kube-api-access-mdn6p\") pod \"nova-api-0\" (UID: \"f03bc7c0-58f8-4a5c-bfcb-47af11705529\") " pod="openstack/nova-api-0" Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.910265 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f03bc7c0-58f8-4a5c-bfcb-47af11705529-config-data\") pod \"nova-api-0\" (UID: \"f03bc7c0-58f8-4a5c-bfcb-47af11705529\") " pod="openstack/nova-api-0" Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.910665 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f03bc7c0-58f8-4a5c-bfcb-47af11705529-logs\") pod \"nova-api-0\" (UID: \"f03bc7c0-58f8-4a5c-bfcb-47af11705529\") " 
pod="openstack/nova-api-0" Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.915858 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.918584 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f03bc7c0-58f8-4a5c-bfcb-47af11705529-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f03bc7c0-58f8-4a5c-bfcb-47af11705529\") " pod="openstack/nova-api-0" Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.923973 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f03bc7c0-58f8-4a5c-bfcb-47af11705529-config-data\") pod \"nova-api-0\" (UID: \"f03bc7c0-58f8-4a5c-bfcb-47af11705529\") " pod="openstack/nova-api-0" Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.928001 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdn6p\" (UniqueName: \"kubernetes.io/projected/f03bc7c0-58f8-4a5c-bfcb-47af11705529-kube-api-access-mdn6p\") pod \"nova-api-0\" (UID: \"f03bc7c0-58f8-4a5c-bfcb-47af11705529\") " pod="openstack/nova-api-0" Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.941058 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="741c0ed3-ec3b-4c57-92d1-eb42974f5383" path="/var/lib/kubelet/pods/741c0ed3-ec3b-4c57-92d1-eb42974f5383/volumes" Oct 14 13:34:43 crc kubenswrapper[4725]: I1014 13:34:43.943408 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee779039-892b-461d-aea0-1139654037bf" path="/var/lib/kubelet/pods/ee779039-892b-461d-aea0-1139654037bf/volumes" Oct 14 13:34:44 crc kubenswrapper[4725]: I1014 13:34:44.015961 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 14 13:34:44 crc kubenswrapper[4725]: I1014 13:34:44.295904 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7243f030-3d65-4fc3-bf95-7a3df406ad4d","Type":"ContainerStarted","Data":"47544b09f5a1bc3f6f101fc538ff8f6631dd547fea09ae1357b6e94acbea89c8"} Oct 14 13:34:44 crc kubenswrapper[4725]: I1014 13:34:44.473410 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.473393405 podStartE2EDuration="2.473393405s" podCreationTimestamp="2025-10-14 13:34:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:34:44.318198803 +0000 UTC m=+1201.166633632" watchObservedRunningTime="2025-10-14 13:34:44.473393405 +0000 UTC m=+1201.321828214" Oct 14 13:34:44 crc kubenswrapper[4725]: I1014 13:34:44.477975 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 14 13:34:44 crc kubenswrapper[4725]: W1014 13:34:44.482685 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf03bc7c0_58f8_4a5c_bfcb_47af11705529.slice/crio-2043d1b8182473812597d191f9f133086e1c12ce068e6469e418967bbca2194b WatchSource:0}: Error finding container 2043d1b8182473812597d191f9f133086e1c12ce068e6469e418967bbca2194b: Status 404 returned error can't find the container with id 2043d1b8182473812597d191f9f133086e1c12ce068e6469e418967bbca2194b Oct 14 13:34:45 crc kubenswrapper[4725]: I1014 13:34:45.313050 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f03bc7c0-58f8-4a5c-bfcb-47af11705529","Type":"ContainerStarted","Data":"b403442bad72241016c1d86071512fd069ba31d91aef0467f230fc611ced04d8"} Oct 14 13:34:45 crc kubenswrapper[4725]: I1014 13:34:45.313521 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f03bc7c0-58f8-4a5c-bfcb-47af11705529","Type":"ContainerStarted","Data":"ef7d5d81e775ef6b51a79f5f3fcb8080959aa7e92a09626ed5a6abbccc42f6e2"} Oct 14 13:34:45 crc kubenswrapper[4725]: I1014 13:34:45.313947 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f03bc7c0-58f8-4a5c-bfcb-47af11705529","Type":"ContainerStarted","Data":"2043d1b8182473812597d191f9f133086e1c12ce068e6469e418967bbca2194b"} Oct 14 13:34:47 crc kubenswrapper[4725]: I1014 13:34:47.645591 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 14 13:34:48 crc kubenswrapper[4725]: I1014 13:34:48.592335 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 14 13:34:48 crc kubenswrapper[4725]: I1014 13:34:48.592396 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 14 13:34:49 crc kubenswrapper[4725]: I1014 13:34:49.612729 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a2f2aea4-7075-45b5-94c7-73d1b181beb8" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 13:34:49 crc kubenswrapper[4725]: I1014 13:34:49.612724 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" 
podUID="a2f2aea4-7075-45b5-94c7-73d1b181beb8" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 13:34:49 crc kubenswrapper[4725]: I1014 13:34:49.686383 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Oct 14 13:34:49 crc kubenswrapper[4725]: I1014 13:34:49.716758 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=6.716728101 podStartE2EDuration="6.716728101s" podCreationTimestamp="2025-10-14 13:34:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:34:45.335808745 +0000 UTC m=+1202.184243554" watchObservedRunningTime="2025-10-14 13:34:49.716728101 +0000 UTC m=+1206.565162940" Oct 14 13:34:52 crc kubenswrapper[4725]: I1014 13:34:52.645696 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 14 13:34:52 crc kubenswrapper[4725]: I1014 13:34:52.687708 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 14 13:34:53 crc kubenswrapper[4725]: I1014 13:34:53.444983 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 14 13:34:54 crc kubenswrapper[4725]: I1014 13:34:54.020640 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 14 13:34:54 crc kubenswrapper[4725]: I1014 13:34:54.020690 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 14 13:34:55 crc kubenswrapper[4725]: I1014 13:34:55.103833 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f03bc7c0-58f8-4a5c-bfcb-47af11705529" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 14 13:34:55 crc kubenswrapper[4725]: I1014 13:34:55.104124 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f03bc7c0-58f8-4a5c-bfcb-47af11705529" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 14 13:34:58 crc kubenswrapper[4725]: I1014 13:34:58.598619 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 14 13:34:58 crc kubenswrapper[4725]: I1014 13:34:58.598829 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 14 13:34:58 crc kubenswrapper[4725]: I1014 13:34:58.607067 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 14 13:34:59 crc kubenswrapper[4725]: I1014 13:34:59.482382 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 14 13:35:03 crc kubenswrapper[4725]: I1014 13:35:03.492633 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:35:03 crc kubenswrapper[4725]: I1014 13:35:03.512780 4725 generic.go:334] "Generic (PLEG): container finished" podID="0d602b0f-b981-4160-8ceb-cf6509cb34b6" containerID="62f14b0a50026a0cbbe3dc4f1ff7faa1b6f4b345f666ea973771b20c9af8f38e" exitCode=137 Oct 14 13:35:03 crc kubenswrapper[4725]: I1014 13:35:03.512842 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:35:03 crc kubenswrapper[4725]: I1014 13:35:03.512861 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0d602b0f-b981-4160-8ceb-cf6509cb34b6","Type":"ContainerDied","Data":"62f14b0a50026a0cbbe3dc4f1ff7faa1b6f4b345f666ea973771b20c9af8f38e"} Oct 14 13:35:03 crc kubenswrapper[4725]: I1014 13:35:03.512924 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0d602b0f-b981-4160-8ceb-cf6509cb34b6","Type":"ContainerDied","Data":"1aea7cdd2cbe532fb1c044b0a30a6f1688e878e4b51624c4ac7ab0551d9797fa"} Oct 14 13:35:03 crc kubenswrapper[4725]: I1014 13:35:03.512980 4725 scope.go:117] "RemoveContainer" containerID="62f14b0a50026a0cbbe3dc4f1ff7faa1b6f4b345f666ea973771b20c9af8f38e" Oct 14 13:35:03 crc kubenswrapper[4725]: I1014 13:35:03.537143 4725 scope.go:117] "RemoveContainer" containerID="62f14b0a50026a0cbbe3dc4f1ff7faa1b6f4b345f666ea973771b20c9af8f38e" Oct 14 13:35:03 crc kubenswrapper[4725]: E1014 13:35:03.537742 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62f14b0a50026a0cbbe3dc4f1ff7faa1b6f4b345f666ea973771b20c9af8f38e\": container with ID starting with 62f14b0a50026a0cbbe3dc4f1ff7faa1b6f4b345f666ea973771b20c9af8f38e not found: ID does not exist" containerID="62f14b0a50026a0cbbe3dc4f1ff7faa1b6f4b345f666ea973771b20c9af8f38e" Oct 14 13:35:03 crc kubenswrapper[4725]: I1014 13:35:03.537799 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62f14b0a50026a0cbbe3dc4f1ff7faa1b6f4b345f666ea973771b20c9af8f38e"} err="failed to get container status \"62f14b0a50026a0cbbe3dc4f1ff7faa1b6f4b345f666ea973771b20c9af8f38e\": rpc error: code = NotFound desc = could not find container \"62f14b0a50026a0cbbe3dc4f1ff7faa1b6f4b345f666ea973771b20c9af8f38e\": container with ID starting with 62f14b0a50026a0cbbe3dc4f1ff7faa1b6f4b345f666ea973771b20c9af8f38e not found: ID does not exist" Oct 14 13:35:03 crc kubenswrapper[4725]: I1014 13:35:03.620110 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d602b0f-b981-4160-8ceb-cf6509cb34b6-combined-ca-bundle\") pod \"0d602b0f-b981-4160-8ceb-cf6509cb34b6\" (UID: \"0d602b0f-b981-4160-8ceb-cf6509cb34b6\") " Oct 14 13:35:03 crc kubenswrapper[4725]: I1014 13:35:03.620188 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d602b0f-b981-4160-8ceb-cf6509cb34b6-config-data\") pod \"0d602b0f-b981-4160-8ceb-cf6509cb34b6\" (UID: \"0d602b0f-b981-4160-8ceb-cf6509cb34b6\") " Oct 14 13:35:03 crc kubenswrapper[4725]: I1014 13:35:03.620335 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4zdl\" (UniqueName: \"kubernetes.io/projected/0d602b0f-b981-4160-8ceb-cf6509cb34b6-kube-api-access-p4zdl\") pod 
\"0d602b0f-b981-4160-8ceb-cf6509cb34b6\" (UID: \"0d602b0f-b981-4160-8ceb-cf6509cb34b6\") " Oct 14 13:35:03 crc kubenswrapper[4725]: I1014 13:35:03.629817 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d602b0f-b981-4160-8ceb-cf6509cb34b6-kube-api-access-p4zdl" (OuterVolumeSpecName: "kube-api-access-p4zdl") pod "0d602b0f-b981-4160-8ceb-cf6509cb34b6" (UID: "0d602b0f-b981-4160-8ceb-cf6509cb34b6"). InnerVolumeSpecName "kube-api-access-p4zdl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:35:03 crc kubenswrapper[4725]: I1014 13:35:03.657783 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d602b0f-b981-4160-8ceb-cf6509cb34b6-config-data" (OuterVolumeSpecName: "config-data") pod "0d602b0f-b981-4160-8ceb-cf6509cb34b6" (UID: "0d602b0f-b981-4160-8ceb-cf6509cb34b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:35:03 crc kubenswrapper[4725]: I1014 13:35:03.659952 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d602b0f-b981-4160-8ceb-cf6509cb34b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d602b0f-b981-4160-8ceb-cf6509cb34b6" (UID: "0d602b0f-b981-4160-8ceb-cf6509cb34b6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:35:03 crc kubenswrapper[4725]: I1014 13:35:03.722639 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d602b0f-b981-4160-8ceb-cf6509cb34b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:35:03 crc kubenswrapper[4725]: I1014 13:35:03.722690 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d602b0f-b981-4160-8ceb-cf6509cb34b6-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 13:35:03 crc kubenswrapper[4725]: I1014 13:35:03.722709 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4zdl\" (UniqueName: \"kubernetes.io/projected/0d602b0f-b981-4160-8ceb-cf6509cb34b6-kube-api-access-p4zdl\") on node \"crc\" DevicePath \"\"" Oct 14 13:35:03 crc kubenswrapper[4725]: I1014 13:35:03.860111 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 14 13:35:03 crc kubenswrapper[4725]: I1014 13:35:03.869364 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 14 13:35:03 crc kubenswrapper[4725]: I1014 13:35:03.881828 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 14 13:35:03 crc kubenswrapper[4725]: E1014 13:35:03.882314 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d602b0f-b981-4160-8ceb-cf6509cb34b6" containerName="nova-cell1-novncproxy-novncproxy" Oct 14 13:35:03 crc kubenswrapper[4725]: I1014 13:35:03.882338 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d602b0f-b981-4160-8ceb-cf6509cb34b6" containerName="nova-cell1-novncproxy-novncproxy" Oct 14 13:35:03 crc kubenswrapper[4725]: I1014 13:35:03.884989 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d602b0f-b981-4160-8ceb-cf6509cb34b6" containerName="nova-cell1-novncproxy-novncproxy" Oct 14 13:35:03 crc kubenswrapper[4725]: I1014 13:35:03.885852 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:35:03 crc kubenswrapper[4725]: I1014 13:35:03.889522 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Oct 14 13:35:03 crc kubenswrapper[4725]: I1014 13:35:03.889944 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Oct 14 13:35:03 crc kubenswrapper[4725]: I1014 13:35:03.894180 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 14 13:35:03 crc kubenswrapper[4725]: I1014 13:35:03.896909 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Oct 14 13:35:03 crc kubenswrapper[4725]: I1014 13:35:03.925432 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb3ef2cb-5705-496c-9419-7609209d830d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"bb3ef2cb-5705-496c-9419-7609209d830d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:35:03 crc kubenswrapper[4725]: I1014 13:35:03.925548 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb3ef2cb-5705-496c-9419-7609209d830d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"bb3ef2cb-5705-496c-9419-7609209d830d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:35:03 crc kubenswrapper[4725]: I1014 13:35:03.925600 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx7vn\" (UniqueName: \"kubernetes.io/projected/bb3ef2cb-5705-496c-9419-7609209d830d-kube-api-access-tx7vn\") pod \"nova-cell1-novncproxy-0\" (UID: \"bb3ef2cb-5705-496c-9419-7609209d830d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:35:03 crc kubenswrapper[4725]: I1014 13:35:03.925643 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb3ef2cb-5705-496c-9419-7609209d830d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bb3ef2cb-5705-496c-9419-7609209d830d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:35:03 crc kubenswrapper[4725]: I1014 13:35:03.925774 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb3ef2cb-5705-496c-9419-7609209d830d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bb3ef2cb-5705-496c-9419-7609209d830d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:35:03 crc kubenswrapper[4725]: I1014 13:35:03.938454 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d602b0f-b981-4160-8ceb-cf6509cb34b6" path="/var/lib/kubelet/pods/0d602b0f-b981-4160-8ceb-cf6509cb34b6/volumes" Oct 14 13:35:04 crc kubenswrapper[4725]: I1014 13:35:04.020609 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 14 13:35:04 crc kubenswrapper[4725]: I1014 13:35:04.020688 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 14 13:35:04 crc kubenswrapper[4725]: I1014 13:35:04.021249 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 14 13:35:04 crc 
kubenswrapper[4725]: I1014 13:35:04.021274 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 14 13:35:04 crc kubenswrapper[4725]: I1014 13:35:04.024618 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 14 13:35:04 crc kubenswrapper[4725]: I1014 13:35:04.024786 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 14 13:35:04 crc kubenswrapper[4725]: I1014 13:35:04.027572 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb3ef2cb-5705-496c-9419-7609209d830d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bb3ef2cb-5705-496c-9419-7609209d830d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:35:04 crc kubenswrapper[4725]: I1014 13:35:04.027630 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb3ef2cb-5705-496c-9419-7609209d830d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"bb3ef2cb-5705-496c-9419-7609209d830d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:35:04 crc kubenswrapper[4725]: I1014 13:35:04.027724 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb3ef2cb-5705-496c-9419-7609209d830d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"bb3ef2cb-5705-496c-9419-7609209d830d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:35:04 crc kubenswrapper[4725]: I1014 13:35:04.027825 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx7vn\" (UniqueName: \"kubernetes.io/projected/bb3ef2cb-5705-496c-9419-7609209d830d-kube-api-access-tx7vn\") pod \"nova-cell1-novncproxy-0\" (UID: \"bb3ef2cb-5705-496c-9419-7609209d830d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:35:04 crc kubenswrapper[4725]: I1014 13:35:04.027858 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb3ef2cb-5705-496c-9419-7609209d830d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bb3ef2cb-5705-496c-9419-7609209d830d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:35:04 crc kubenswrapper[4725]: I1014 13:35:04.031372 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb3ef2cb-5705-496c-9419-7609209d830d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bb3ef2cb-5705-496c-9419-7609209d830d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:35:04 crc kubenswrapper[4725]: I1014 13:35:04.031741 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb3ef2cb-5705-496c-9419-7609209d830d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"bb3ef2cb-5705-496c-9419-7609209d830d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:35:04 crc kubenswrapper[4725]: I1014 13:35:04.031879 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb3ef2cb-5705-496c-9419-7609209d830d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"bb3ef2cb-5705-496c-9419-7609209d830d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:35:04 crc 
kubenswrapper[4725]: I1014 13:35:04.034361 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb3ef2cb-5705-496c-9419-7609209d830d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"bb3ef2cb-5705-496c-9419-7609209d830d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:35:04 crc kubenswrapper[4725]: I1014 13:35:04.046928 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx7vn\" (UniqueName: \"kubernetes.io/projected/bb3ef2cb-5705-496c-9419-7609209d830d-kube-api-access-tx7vn\") pod \"nova-cell1-novncproxy-0\" (UID: \"bb3ef2cb-5705-496c-9419-7609209d830d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:35:04 crc kubenswrapper[4725]: I1014 13:35:04.204839 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-jllfk"] Oct 14 13:35:04 crc kubenswrapper[4725]: I1014 13:35:04.206869 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-jllfk" Oct 14 13:35:04 crc kubenswrapper[4725]: I1014 13:35:04.210045 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:35:04 crc kubenswrapper[4725]: I1014 13:35:04.249894 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-jllfk"] Oct 14 13:35:04 crc kubenswrapper[4725]: I1014 13:35:04.341360 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dfe26ed5-d004-4505-8bfc-f72e530f121e-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-jllfk\" (UID: \"dfe26ed5-d004-4505-8bfc-f72e530f121e\") " pod="openstack/dnsmasq-dns-59cf4bdb65-jllfk" Oct 14 13:35:04 crc kubenswrapper[4725]: I1014 13:35:04.341506 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dfe26ed5-d004-4505-8bfc-f72e530f121e-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-jllfk\" (UID: \"dfe26ed5-d004-4505-8bfc-f72e530f121e\") " pod="openstack/dnsmasq-dns-59cf4bdb65-jllfk" Oct 14 13:35:04 crc kubenswrapper[4725]: I1014 13:35:04.341578 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmh22\" (UniqueName: \"kubernetes.io/projected/dfe26ed5-d004-4505-8bfc-f72e530f121e-kube-api-access-hmh22\") pod \"dnsmasq-dns-59cf4bdb65-jllfk\" (UID: \"dfe26ed5-d004-4505-8bfc-f72e530f121e\") " pod="openstack/dnsmasq-dns-59cf4bdb65-jllfk" Oct 14 13:35:04 crc kubenswrapper[4725]: I1014 13:35:04.341672 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dfe26ed5-d004-4505-8bfc-f72e530f121e-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-jllfk\" (UID: \"dfe26ed5-d004-4505-8bfc-f72e530f121e\") " pod="openstack/dnsmasq-dns-59cf4bdb65-jllfk" Oct 14 13:35:04 crc kubenswrapper[4725]: I1014 13:35:04.341736 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfe26ed5-d004-4505-8bfc-f72e530f121e-config\") pod \"dnsmasq-dns-59cf4bdb65-jllfk\" (UID: \"dfe26ed5-d004-4505-8bfc-f72e530f121e\") " pod="openstack/dnsmasq-dns-59cf4bdb65-jllfk" Oct 14 13:35:04 crc kubenswrapper[4725]: I1014 13:35:04.341833 4725 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dfe26ed5-d004-4505-8bfc-f72e530f121e-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-jllfk\" (UID: \"dfe26ed5-d004-4505-8bfc-f72e530f121e\") " pod="openstack/dnsmasq-dns-59cf4bdb65-jllfk" Oct 14 13:35:04 crc kubenswrapper[4725]: I1014 13:35:04.447692 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dfe26ed5-d004-4505-8bfc-f72e530f121e-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-jllfk\" (UID: \"dfe26ed5-d004-4505-8bfc-f72e530f121e\") " pod="openstack/dnsmasq-dns-59cf4bdb65-jllfk" Oct 14 13:35:04 crc kubenswrapper[4725]: I1014 13:35:04.448046 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmh22\" (UniqueName: \"kubernetes.io/projected/dfe26ed5-d004-4505-8bfc-f72e530f121e-kube-api-access-hmh22\") pod \"dnsmasq-dns-59cf4bdb65-jllfk\" (UID: \"dfe26ed5-d004-4505-8bfc-f72e530f121e\") " pod="openstack/dnsmasq-dns-59cf4bdb65-jllfk" Oct 14 13:35:04 crc kubenswrapper[4725]: I1014 13:35:04.448135 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dfe26ed5-d004-4505-8bfc-f72e530f121e-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-jllfk\" (UID: \"dfe26ed5-d004-4505-8bfc-f72e530f121e\") " pod="openstack/dnsmasq-dns-59cf4bdb65-jllfk" Oct 14 13:35:04 crc kubenswrapper[4725]: I1014 13:35:04.448187 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfe26ed5-d004-4505-8bfc-f72e530f121e-config\") pod \"dnsmasq-dns-59cf4bdb65-jllfk\" (UID: \"dfe26ed5-d004-4505-8bfc-f72e530f121e\") " pod="openstack/dnsmasq-dns-59cf4bdb65-jllfk" Oct 14 13:35:04 crc kubenswrapper[4725]: I1014 13:35:04.448256 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dfe26ed5-d004-4505-8bfc-f72e530f121e-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-jllfk\" (UID: \"dfe26ed5-d004-4505-8bfc-f72e530f121e\") " pod="openstack/dnsmasq-dns-59cf4bdb65-jllfk" Oct 14 13:35:04 crc kubenswrapper[4725]: I1014 13:35:04.448306 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dfe26ed5-d004-4505-8bfc-f72e530f121e-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-jllfk\" (UID: \"dfe26ed5-d004-4505-8bfc-f72e530f121e\") " pod="openstack/dnsmasq-dns-59cf4bdb65-jllfk" Oct 14 13:35:04 crc kubenswrapper[4725]: I1014 13:35:04.449592 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dfe26ed5-d004-4505-8bfc-f72e530f121e-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-jllfk\" (UID: \"dfe26ed5-d004-4505-8bfc-f72e530f121e\") " pod="openstack/dnsmasq-dns-59cf4bdb65-jllfk" Oct 14 13:35:04 crc kubenswrapper[4725]: I1014 13:35:04.449658 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dfe26ed5-d004-4505-8bfc-f72e530f121e-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-jllfk\" (UID: \"dfe26ed5-d004-4505-8bfc-f72e530f121e\") " pod="openstack/dnsmasq-dns-59cf4bdb65-jllfk" Oct 14 13:35:04 crc kubenswrapper[4725]: I1014 13:35:04.450721 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dfe26ed5-d004-4505-8bfc-f72e530f121e-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-jllfk\" (UID: \"dfe26ed5-d004-4505-8bfc-f72e530f121e\") " pod="openstack/dnsmasq-dns-59cf4bdb65-jllfk" Oct 14 13:35:04 crc kubenswrapper[4725]: I1014 13:35:04.451205 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfe26ed5-d004-4505-8bfc-f72e530f121e-config\") pod \"dnsmasq-dns-59cf4bdb65-jllfk\" (UID: \"dfe26ed5-d004-4505-8bfc-f72e530f121e\") " pod="openstack/dnsmasq-dns-59cf4bdb65-jllfk" Oct 14 13:35:04 crc kubenswrapper[4725]: I1014 13:35:04.451808 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dfe26ed5-d004-4505-8bfc-f72e530f121e-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-jllfk\" (UID: \"dfe26ed5-d004-4505-8bfc-f72e530f121e\") " pod="openstack/dnsmasq-dns-59cf4bdb65-jllfk" Oct 14 13:35:04 crc kubenswrapper[4725]: I1014 13:35:04.477106 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmh22\" (UniqueName: \"kubernetes.io/projected/dfe26ed5-d004-4505-8bfc-f72e530f121e-kube-api-access-hmh22\") pod \"dnsmasq-dns-59cf4bdb65-jllfk\" (UID: \"dfe26ed5-d004-4505-8bfc-f72e530f121e\") " pod="openstack/dnsmasq-dns-59cf4bdb65-jllfk" Oct 14 13:35:04 crc kubenswrapper[4725]: I1014 13:35:04.625924 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-jllfk" Oct 14 13:35:04 crc kubenswrapper[4725]: I1014 13:35:04.764043 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 14 13:35:05 crc kubenswrapper[4725]: I1014 13:35:05.109334 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-jllfk"] Oct 14 13:35:05 crc kubenswrapper[4725]: W1014 13:35:05.111097 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddfe26ed5_d004_4505_8bfc_f72e530f121e.slice/crio-618222870cf763bfd67ee8ed41704ad8fe8587d8dfff17be5978ef07c571ad54 WatchSource:0}: Error finding container 618222870cf763bfd67ee8ed41704ad8fe8587d8dfff17be5978ef07c571ad54: Status 404 returned error can't find the container with id 618222870cf763bfd67ee8ed41704ad8fe8587d8dfff17be5978ef07c571ad54 Oct 14 13:35:05 crc kubenswrapper[4725]: I1014 13:35:05.548195 4725 generic.go:334] "Generic (PLEG): container finished" podID="dfe26ed5-d004-4505-8bfc-f72e530f121e" containerID="6e30c31de2a42ce4b3042cf696bf6c99fe7ad3cab41da45995f8bc4e1bdcf1d1" exitCode=0 Oct 14 13:35:05 crc kubenswrapper[4725]: I1014 13:35:05.548308 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-jllfk" event={"ID":"dfe26ed5-d004-4505-8bfc-f72e530f121e","Type":"ContainerDied","Data":"6e30c31de2a42ce4b3042cf696bf6c99fe7ad3cab41da45995f8bc4e1bdcf1d1"} Oct 14 13:35:05 crc kubenswrapper[4725]: I1014 13:35:05.548337 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-jllfk" event={"ID":"dfe26ed5-d004-4505-8bfc-f72e530f121e","Type":"ContainerStarted","Data":"618222870cf763bfd67ee8ed41704ad8fe8587d8dfff17be5978ef07c571ad54"} Oct 14 13:35:05 crc kubenswrapper[4725]: I1014 13:35:05.550360 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"bb3ef2cb-5705-496c-9419-7609209d830d","Type":"ContainerStarted","Data":"a815c6e79b310432eb0c33d98e47d4434c729ae2921e269aab839637ddf4c449"} Oct 14 13:35:05 crc kubenswrapper[4725]: I1014 13:35:05.550408 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"bb3ef2cb-5705-496c-9419-7609209d830d","Type":"ContainerStarted","Data":"04597ebe196a1b0730c36579bd5fc4421c18d7068a76e91cdbe51b58585dfb20"} Oct 14 13:35:05 crc kubenswrapper[4725]: I1014 13:35:05.606842 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.606824176 podStartE2EDuration="2.606824176s" podCreationTimestamp="2025-10-14 13:35:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:35:05.606341673 +0000 UTC m=+1222.454776492" watchObservedRunningTime="2025-10-14 13:35:05.606824176 +0000 UTC m=+1222.455258985" Oct 14 13:35:06 crc kubenswrapper[4725]: I1014 13:35:06.150885 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 14 13:35:06 crc kubenswrapper[4725]: I1014 13:35:06.151200 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="32dec6da-c9a2-490c-8b50-14e95259066b" containerName="ceilometer-central-agent" containerID="cri-o://919957d22e96b2058366f667070783ef3829c109c09e419921da3d105ead154f" gracePeriod=30 Oct 14 13:35:06 crc kubenswrapper[4725]: I1014 13:35:06.151275 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="32dec6da-c9a2-490c-8b50-14e95259066b" containerName="sg-core" containerID="cri-o://1ea02786c15b34ed405c1b92534d523e279490cd00c7199e4b8c7a9eaa2498d1" gracePeriod=30 Oct 14 13:35:06 crc kubenswrapper[4725]: I1014 13:35:06.151324 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="32dec6da-c9a2-490c-8b50-14e95259066b" containerName="ceilometer-notification-agent" containerID="cri-o://05ff8112b4f19531b2b9e2569e78cef60207d08e5ea7372bec84911b6d14bbab" gracePeriod=30 Oct 14 13:35:06 crc kubenswrapper[4725]: I1014 13:35:06.151300 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="32dec6da-c9a2-490c-8b50-14e95259066b" containerName="proxy-httpd" containerID="cri-o://c86368d69a2defa52913b87b2284e72895e58adcf4c9a3c7481745237671412c" gracePeriod=30 Oct 14 13:35:06 crc kubenswrapper[4725]: I1014 13:35:06.362639 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 14 13:35:06 crc kubenswrapper[4725]: I1014 13:35:06.577275 4725 generic.go:334] "Generic (PLEG): container finished" podID="32dec6da-c9a2-490c-8b50-14e95259066b" containerID="c86368d69a2defa52913b87b2284e72895e58adcf4c9a3c7481745237671412c" exitCode=0 Oct 14 13:35:06 crc kubenswrapper[4725]: I1014 13:35:06.577428 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32dec6da-c9a2-490c-8b50-14e95259066b","Type":"ContainerDied","Data":"c86368d69a2defa52913b87b2284e72895e58adcf4c9a3c7481745237671412c"} Oct 14 13:35:06 crc kubenswrapper[4725]: I1014 13:35:06.577488 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"32dec6da-c9a2-490c-8b50-14e95259066b","Type":"ContainerDied","Data":"1ea02786c15b34ed405c1b92534d523e279490cd00c7199e4b8c7a9eaa2498d1"} Oct 14 13:35:06 crc kubenswrapper[4725]: I1014 13:35:06.577440 4725 generic.go:334] "Generic (PLEG): container finished" podID="32dec6da-c9a2-490c-8b50-14e95259066b" containerID="1ea02786c15b34ed405c1b92534d523e279490cd00c7199e4b8c7a9eaa2498d1" exitCode=2 Oct 14 13:35:06 crc kubenswrapper[4725]: I1014 13:35:06.580779 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-jllfk" event={"ID":"dfe26ed5-d004-4505-8bfc-f72e530f121e","Type":"ContainerStarted","Data":"6f170435a0c4ba96604923f83dc198318b3d4834f0b531dfc900c739a7c2877a"} Oct 14 13:35:06 crc kubenswrapper[4725]: I1014 13:35:06.580956 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f03bc7c0-58f8-4a5c-bfcb-47af11705529" containerName="nova-api-log" containerID="cri-o://ef7d5d81e775ef6b51a79f5f3fcb8080959aa7e92a09626ed5a6abbccc42f6e2" gracePeriod=30 Oct 14 13:35:06 crc kubenswrapper[4725]: I1014 13:35:06.581021 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f03bc7c0-58f8-4a5c-bfcb-47af11705529" containerName="nova-api-api" containerID="cri-o://b403442bad72241016c1d86071512fd069ba31d91aef0467f230fc611ced04d8" gracePeriod=30 Oct 14 13:35:06 crc kubenswrapper[4725]: I1014 13:35:06.605867 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59cf4bdb65-jllfk" podStartSLOduration=2.605849895 podStartE2EDuration="2.605849895s" podCreationTimestamp="2025-10-14 13:35:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:35:06.605016731 +0000 UTC m=+1223.453451570" watchObservedRunningTime="2025-10-14 13:35:06.605849895 +0000 UTC m=+1223.454284714" Oct 14 13:35:07 crc kubenswrapper[4725]: I1014 13:35:07.594205 4725 generic.go:334] "Generic (PLEG): container finished" podID="f03bc7c0-58f8-4a5c-bfcb-47af11705529" containerID="ef7d5d81e775ef6b51a79f5f3fcb8080959aa7e92a09626ed5a6abbccc42f6e2" exitCode=143 Oct 14 13:35:07 crc kubenswrapper[4725]: I1014 13:35:07.594275 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f03bc7c0-58f8-4a5c-bfcb-47af11705529","Type":"ContainerDied","Data":"ef7d5d81e775ef6b51a79f5f3fcb8080959aa7e92a09626ed5a6abbccc42f6e2"} Oct 14 13:35:07 crc kubenswrapper[4725]: I1014 13:35:07.596895 4725 generic.go:334] "Generic (PLEG): container finished" podID="32dec6da-c9a2-490c-8b50-14e95259066b" containerID="919957d22e96b2058366f667070783ef3829c109c09e419921da3d105ead154f" exitCode=0 Oct 14 13:35:07 crc kubenswrapper[4725]: I1014 13:35:07.597766 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32dec6da-c9a2-490c-8b50-14e95259066b","Type":"ContainerDied","Data":"919957d22e96b2058366f667070783ef3829c109c09e419921da3d105ead154f"} Oct 14 13:35:07 crc kubenswrapper[4725]: I1014 13:35:07.597802 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59cf4bdb65-jllfk" Oct 14 13:35:09 crc kubenswrapper[4725]: I1014 13:35:09.210830 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.211970 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.262031 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdn6p\" (UniqueName: \"kubernetes.io/projected/f03bc7c0-58f8-4a5c-bfcb-47af11705529-kube-api-access-mdn6p\") pod \"f03bc7c0-58f8-4a5c-bfcb-47af11705529\" (UID: \"f03bc7c0-58f8-4a5c-bfcb-47af11705529\") " Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.269865 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f03bc7c0-58f8-4a5c-bfcb-47af11705529-kube-api-access-mdn6p" (OuterVolumeSpecName: "kube-api-access-mdn6p") pod "f03bc7c0-58f8-4a5c-bfcb-47af11705529" (UID: "f03bc7c0-58f8-4a5c-bfcb-47af11705529"). InnerVolumeSpecName "kube-api-access-mdn6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.363824 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f03bc7c0-58f8-4a5c-bfcb-47af11705529-combined-ca-bundle\") pod \"f03bc7c0-58f8-4a5c-bfcb-47af11705529\" (UID: \"f03bc7c0-58f8-4a5c-bfcb-47af11705529\") " Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.364205 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f03bc7c0-58f8-4a5c-bfcb-47af11705529-config-data\") pod \"f03bc7c0-58f8-4a5c-bfcb-47af11705529\" (UID: \"f03bc7c0-58f8-4a5c-bfcb-47af11705529\") " Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.364399 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f03bc7c0-58f8-4a5c-bfcb-47af11705529-logs\") pod \"f03bc7c0-58f8-4a5c-bfcb-47af11705529\" (UID: \"f03bc7c0-58f8-4a5c-bfcb-47af11705529\") " Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.365349 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdn6p\" (UniqueName: \"kubernetes.io/projected/f03bc7c0-58f8-4a5c-bfcb-47af11705529-kube-api-access-mdn6p\") on node \"crc\" DevicePath \"\"" Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.365332 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f03bc7c0-58f8-4a5c-bfcb-47af11705529-logs" (OuterVolumeSpecName: "logs") pod "f03bc7c0-58f8-4a5c-bfcb-47af11705529" (UID: "f03bc7c0-58f8-4a5c-bfcb-47af11705529"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.394773 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f03bc7c0-58f8-4a5c-bfcb-47af11705529-config-data" (OuterVolumeSpecName: "config-data") pod "f03bc7c0-58f8-4a5c-bfcb-47af11705529" (UID: "f03bc7c0-58f8-4a5c-bfcb-47af11705529"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.404254 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f03bc7c0-58f8-4a5c-bfcb-47af11705529-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f03bc7c0-58f8-4a5c-bfcb-47af11705529" (UID: "f03bc7c0-58f8-4a5c-bfcb-47af11705529"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.466766 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f03bc7c0-58f8-4a5c-bfcb-47af11705529-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.466801 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f03bc7c0-58f8-4a5c-bfcb-47af11705529-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.466815 4725 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f03bc7c0-58f8-4a5c-bfcb-47af11705529-logs\") on node \"crc\" DevicePath \"\"" Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.625191 4725 generic.go:334] "Generic (PLEG): container finished" podID="f03bc7c0-58f8-4a5c-bfcb-47af11705529" containerID="b403442bad72241016c1d86071512fd069ba31d91aef0467f230fc611ced04d8" exitCode=0 Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.625274 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.625250 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f03bc7c0-58f8-4a5c-bfcb-47af11705529","Type":"ContainerDied","Data":"b403442bad72241016c1d86071512fd069ba31d91aef0467f230fc611ced04d8"} Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.625429 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f03bc7c0-58f8-4a5c-bfcb-47af11705529","Type":"ContainerDied","Data":"2043d1b8182473812597d191f9f133086e1c12ce068e6469e418967bbca2194b"} Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.625473 4725 scope.go:117] "RemoveContainer" containerID="b403442bad72241016c1d86071512fd069ba31d91aef0467f230fc611ced04d8" Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.650082 4725 scope.go:117] "RemoveContainer" containerID="ef7d5d81e775ef6b51a79f5f3fcb8080959aa7e92a09626ed5a6abbccc42f6e2" Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.682758 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.683161 4725 scope.go:117] "RemoveContainer" containerID="b403442bad72241016c1d86071512fd069ba31d91aef0467f230fc611ced04d8" Oct 14 13:35:10 crc kubenswrapper[4725]: E1014 13:35:10.683765 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b403442bad72241016c1d86071512fd069ba31d91aef0467f230fc611ced04d8\": container with ID starting with b403442bad72241016c1d86071512fd069ba31d91aef0467f230fc611ced04d8 not found: ID does not exist" containerID="b403442bad72241016c1d86071512fd069ba31d91aef0467f230fc611ced04d8" Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.683856 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b403442bad72241016c1d86071512fd069ba31d91aef0467f230fc611ced04d8"} err="failed to get container status \"b403442bad72241016c1d86071512fd069ba31d91aef0467f230fc611ced04d8\": rpc error: code = NotFound desc = could not find container \"b403442bad72241016c1d86071512fd069ba31d91aef0467f230fc611ced04d8\": container with ID starting with 
b403442bad72241016c1d86071512fd069ba31d91aef0467f230fc611ced04d8 not found: ID does not exist" Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.683911 4725 scope.go:117] "RemoveContainer" containerID="ef7d5d81e775ef6b51a79f5f3fcb8080959aa7e92a09626ed5a6abbccc42f6e2" Oct 14 13:35:10 crc kubenswrapper[4725]: E1014 13:35:10.684529 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef7d5d81e775ef6b51a79f5f3fcb8080959aa7e92a09626ed5a6abbccc42f6e2\": container with ID starting with ef7d5d81e775ef6b51a79f5f3fcb8080959aa7e92a09626ed5a6abbccc42f6e2 not found: ID does not exist" containerID="ef7d5d81e775ef6b51a79f5f3fcb8080959aa7e92a09626ed5a6abbccc42f6e2" Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.684591 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef7d5d81e775ef6b51a79f5f3fcb8080959aa7e92a09626ed5a6abbccc42f6e2"} err="failed to get container status \"ef7d5d81e775ef6b51a79f5f3fcb8080959aa7e92a09626ed5a6abbccc42f6e2\": rpc error: code = NotFound desc = could not find container \"ef7d5d81e775ef6b51a79f5f3fcb8080959aa7e92a09626ed5a6abbccc42f6e2\": container with ID starting with ef7d5d81e775ef6b51a79f5f3fcb8080959aa7e92a09626ed5a6abbccc42f6e2 not found: ID does not exist" Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.699131 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.705951 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 14 13:35:10 crc kubenswrapper[4725]: E1014 13:35:10.706532 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f03bc7c0-58f8-4a5c-bfcb-47af11705529" containerName="nova-api-log" Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.706606 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f03bc7c0-58f8-4a5c-bfcb-47af11705529" containerName="nova-api-log" Oct 14 13:35:10 crc kubenswrapper[4725]: E1014 13:35:10.706681 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f03bc7c0-58f8-4a5c-bfcb-47af11705529" containerName="nova-api-api" Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.706736 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f03bc7c0-58f8-4a5c-bfcb-47af11705529" containerName="nova-api-api" Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.707052 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="f03bc7c0-58f8-4a5c-bfcb-47af11705529" containerName="nova-api-api" Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.707124 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="f03bc7c0-58f8-4a5c-bfcb-47af11705529" containerName="nova-api-log" Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.708238 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.712292 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.713315 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.717090 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.721059 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.873481 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ae2f566-a1e5-4bbd-8502-29665c052b35-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6ae2f566-a1e5-4bbd-8502-29665c052b35\") " pod="openstack/nova-api-0" Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.874356 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ae2f566-a1e5-4bbd-8502-29665c052b35-logs\") pod \"nova-api-0\" (UID: \"6ae2f566-a1e5-4bbd-8502-29665c052b35\") " pod="openstack/nova-api-0" Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.874496 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ae2f566-a1e5-4bbd-8502-29665c052b35-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6ae2f566-a1e5-4bbd-8502-29665c052b35\") " pod="openstack/nova-api-0" Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.874834 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrsgr\" (UniqueName: \"kubernetes.io/projected/6ae2f566-a1e5-4bbd-8502-29665c052b35-kube-api-access-rrsgr\") pod \"nova-api-0\" (UID: \"6ae2f566-a1e5-4bbd-8502-29665c052b35\") " pod="openstack/nova-api-0" Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.874964 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ae2f566-a1e5-4bbd-8502-29665c052b35-public-tls-certs\") pod \"nova-api-0\" (UID: \"6ae2f566-a1e5-4bbd-8502-29665c052b35\") " pod="openstack/nova-api-0" Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.875078 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ae2f566-a1e5-4bbd-8502-29665c052b35-config-data\") pod \"nova-api-0\" (UID: \"6ae2f566-a1e5-4bbd-8502-29665c052b35\") " pod="openstack/nova-api-0" Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.977847 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ae2f566-a1e5-4bbd-8502-29665c052b35-logs\") pod \"nova-api-0\" (UID: \"6ae2f566-a1e5-4bbd-8502-29665c052b35\") " pod="openstack/nova-api-0" Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.977954 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ae2f566-a1e5-4bbd-8502-29665c052b35-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"6ae2f566-a1e5-4bbd-8502-29665c052b35\") " pod="openstack/nova-api-0" Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.978043 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrsgr\" (UniqueName: \"kubernetes.io/projected/6ae2f566-a1e5-4bbd-8502-29665c052b35-kube-api-access-rrsgr\") pod \"nova-api-0\" (UID: \"6ae2f566-a1e5-4bbd-8502-29665c052b35\") " pod="openstack/nova-api-0" Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.978089 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ae2f566-a1e5-4bbd-8502-29665c052b35-public-tls-certs\") pod \"nova-api-0\" (UID: \"6ae2f566-a1e5-4bbd-8502-29665c052b35\") " pod="openstack/nova-api-0" Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.978143 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ae2f566-a1e5-4bbd-8502-29665c052b35-config-data\") pod \"nova-api-0\" (UID: \"6ae2f566-a1e5-4bbd-8502-29665c052b35\") " pod="openstack/nova-api-0" Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.978201 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ae2f566-a1e5-4bbd-8502-29665c052b35-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6ae2f566-a1e5-4bbd-8502-29665c052b35\") " pod="openstack/nova-api-0" Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.979077 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ae2f566-a1e5-4bbd-8502-29665c052b35-logs\") pod \"nova-api-0\" (UID: \"6ae2f566-a1e5-4bbd-8502-29665c052b35\") " pod="openstack/nova-api-0" Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.986523 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ae2f566-a1e5-4bbd-8502-29665c052b35-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6ae2f566-a1e5-4bbd-8502-29665c052b35\") " pod="openstack/nova-api-0" Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.988249 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ae2f566-a1e5-4bbd-8502-29665c052b35-public-tls-certs\") pod \"nova-api-0\" (UID: \"6ae2f566-a1e5-4bbd-8502-29665c052b35\") " pod="openstack/nova-api-0" Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.988443 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ae2f566-a1e5-4bbd-8502-29665c052b35-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6ae2f566-a1e5-4bbd-8502-29665c052b35\") " pod="openstack/nova-api-0" Oct 14 13:35:10 crc kubenswrapper[4725]: I1014 13:35:10.989067 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ae2f566-a1e5-4bbd-8502-29665c052b35-config-data\") pod \"nova-api-0\" (UID: \"6ae2f566-a1e5-4bbd-8502-29665c052b35\") " pod="openstack/nova-api-0" Oct 14 13:35:11 crc kubenswrapper[4725]: I1014 13:35:11.001866 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrsgr\" (UniqueName: \"kubernetes.io/projected/6ae2f566-a1e5-4bbd-8502-29665c052b35-kube-api-access-rrsgr\") pod \"nova-api-0\" (UID: \"6ae2f566-a1e5-4bbd-8502-29665c052b35\") " 
pod="openstack/nova-api-0" Oct 14 13:35:11 crc kubenswrapper[4725]: I1014 13:35:11.028558 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 14 13:35:11 crc kubenswrapper[4725]: I1014 13:35:11.483318 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 14 13:35:11 crc kubenswrapper[4725]: I1014 13:35:11.655540 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6ae2f566-a1e5-4bbd-8502-29665c052b35","Type":"ContainerStarted","Data":"7430cc87cacdd521cf1a20f3d0632f98345ee6b87735b9f78146329317626a08"} Oct 14 13:35:11 crc kubenswrapper[4725]: I1014 13:35:11.655984 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6ae2f566-a1e5-4bbd-8502-29665c052b35","Type":"ContainerStarted","Data":"35d4bd6947cdc02ce89facc58a300383108bb0bd7eaf680d4afbd2cae16aab41"} Oct 14 13:35:11 crc kubenswrapper[4725]: I1014 13:35:11.935978 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f03bc7c0-58f8-4a5c-bfcb-47af11705529" path="/var/lib/kubelet/pods/f03bc7c0-58f8-4a5c-bfcb-47af11705529/volumes" Oct 14 13:35:12 crc kubenswrapper[4725]: I1014 13:35:12.669994 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6ae2f566-a1e5-4bbd-8502-29665c052b35","Type":"ContainerStarted","Data":"deb65aa37755f824f9ac2869f7ec115ec8e38919951ecc4cdaedb8fc8d6c6b5b"} Oct 14 13:35:12 crc kubenswrapper[4725]: I1014 13:35:12.693851 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.693834849 podStartE2EDuration="2.693834849s" podCreationTimestamp="2025-10-14 13:35:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:35:12.69204678 +0000 UTC m=+1229.540481599" watchObservedRunningTime="2025-10-14 13:35:12.693834849 +0000 UTC m=+1229.542269658" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.097854 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.126813 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32dec6da-c9a2-490c-8b50-14e95259066b-config-data\") pod \"32dec6da-c9a2-490c-8b50-14e95259066b\" (UID: \"32dec6da-c9a2-490c-8b50-14e95259066b\") " Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.126892 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32dec6da-c9a2-490c-8b50-14e95259066b-log-httpd\") pod \"32dec6da-c9a2-490c-8b50-14e95259066b\" (UID: \"32dec6da-c9a2-490c-8b50-14e95259066b\") " Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.126971 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32dec6da-c9a2-490c-8b50-14e95259066b-combined-ca-bundle\") pod \"32dec6da-c9a2-490c-8b50-14e95259066b\" (UID: \"32dec6da-c9a2-490c-8b50-14e95259066b\") " Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.127064 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k74vv\" (UniqueName: \"kubernetes.io/projected/32dec6da-c9a2-490c-8b50-14e95259066b-kube-api-access-k74vv\") pod \"32dec6da-c9a2-490c-8b50-14e95259066b\" (UID: \"32dec6da-c9a2-490c-8b50-14e95259066b\") " Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.127121 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/32dec6da-c9a2-490c-8b50-14e95259066b-sg-core-conf-yaml\") pod \"32dec6da-c9a2-490c-8b50-14e95259066b\" (UID: \"32dec6da-c9a2-490c-8b50-14e95259066b\") " Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.127157 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32dec6da-c9a2-490c-8b50-14e95259066b-run-httpd\") pod \"32dec6da-c9a2-490c-8b50-14e95259066b\" (UID: \"32dec6da-c9a2-490c-8b50-14e95259066b\") " Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.127213 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32dec6da-c9a2-490c-8b50-14e95259066b-scripts\") pod \"32dec6da-c9a2-490c-8b50-14e95259066b\" (UID: \"32dec6da-c9a2-490c-8b50-14e95259066b\") " Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.127309 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/32dec6da-c9a2-490c-8b50-14e95259066b-ceilometer-tls-certs\") pod \"32dec6da-c9a2-490c-8b50-14e95259066b\" (UID: \"32dec6da-c9a2-490c-8b50-14e95259066b\") " Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.127507 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32dec6da-c9a2-490c-8b50-14e95259066b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "32dec6da-c9a2-490c-8b50-14e95259066b" (UID: "32dec6da-c9a2-490c-8b50-14e95259066b"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.128191 4725 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32dec6da-c9a2-490c-8b50-14e95259066b-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.128789 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32dec6da-c9a2-490c-8b50-14e95259066b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "32dec6da-c9a2-490c-8b50-14e95259066b" (UID: "32dec6da-c9a2-490c-8b50-14e95259066b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.137709 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32dec6da-c9a2-490c-8b50-14e95259066b-kube-api-access-k74vv" (OuterVolumeSpecName: "kube-api-access-k74vv") pod "32dec6da-c9a2-490c-8b50-14e95259066b" (UID: "32dec6da-c9a2-490c-8b50-14e95259066b"). InnerVolumeSpecName "kube-api-access-k74vv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.158003 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32dec6da-c9a2-490c-8b50-14e95259066b-scripts" (OuterVolumeSpecName: "scripts") pod "32dec6da-c9a2-490c-8b50-14e95259066b" (UID: "32dec6da-c9a2-490c-8b50-14e95259066b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.183654 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32dec6da-c9a2-490c-8b50-14e95259066b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "32dec6da-c9a2-490c-8b50-14e95259066b" (UID: "32dec6da-c9a2-490c-8b50-14e95259066b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.196040 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32dec6da-c9a2-490c-8b50-14e95259066b-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "32dec6da-c9a2-490c-8b50-14e95259066b" (UID: "32dec6da-c9a2-490c-8b50-14e95259066b"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.232670 4725 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/32dec6da-c9a2-490c-8b50-14e95259066b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.232706 4725 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32dec6da-c9a2-490c-8b50-14e95259066b-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.232715 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32dec6da-c9a2-490c-8b50-14e95259066b-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.232726 4725 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/32dec6da-c9a2-490c-8b50-14e95259066b-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.232735 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k74vv\" (UniqueName: \"kubernetes.io/projected/32dec6da-c9a2-490c-8b50-14e95259066b-kube-api-access-k74vv\") on node \"crc\" DevicePath \"\"" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.245132 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32dec6da-c9a2-490c-8b50-14e95259066b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32dec6da-c9a2-490c-8b50-14e95259066b" (UID: "32dec6da-c9a2-490c-8b50-14e95259066b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.247868 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32dec6da-c9a2-490c-8b50-14e95259066b-config-data" (OuterVolumeSpecName: "config-data") pod "32dec6da-c9a2-490c-8b50-14e95259066b" (UID: "32dec6da-c9a2-490c-8b50-14e95259066b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.334443 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32dec6da-c9a2-490c-8b50-14e95259066b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.334528 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32dec6da-c9a2-490c-8b50-14e95259066b-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.682316 4725 generic.go:334] "Generic (PLEG): container finished" podID="32dec6da-c9a2-490c-8b50-14e95259066b" containerID="05ff8112b4f19531b2b9e2569e78cef60207d08e5ea7372bec84911b6d14bbab" exitCode=0 Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.682403 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.682399 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32dec6da-c9a2-490c-8b50-14e95259066b","Type":"ContainerDied","Data":"05ff8112b4f19531b2b9e2569e78cef60207d08e5ea7372bec84911b6d14bbab"} Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.682731 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32dec6da-c9a2-490c-8b50-14e95259066b","Type":"ContainerDied","Data":"8529a8c4eb1961c7c4b1825316308676edc31cc003c00a1d4a7c28d3bd66ffd3"} Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.683082 4725 scope.go:117] "RemoveContainer" containerID="c86368d69a2defa52913b87b2284e72895e58adcf4c9a3c7481745237671412c" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.706492 4725 scope.go:117] "RemoveContainer" containerID="1ea02786c15b34ed405c1b92534d523e279490cd00c7199e4b8c7a9eaa2498d1" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.730678 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.749942 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.755877 4725 scope.go:117] "RemoveContainer" containerID="05ff8112b4f19531b2b9e2569e78cef60207d08e5ea7372bec84911b6d14bbab" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.768526 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 14 13:35:13 crc kubenswrapper[4725]: E1014 13:35:13.769202 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32dec6da-c9a2-490c-8b50-14e95259066b" containerName="sg-core" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.769227 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="32dec6da-c9a2-490c-8b50-14e95259066b" containerName="sg-core" Oct 14 13:35:13 crc kubenswrapper[4725]: E1014 13:35:13.769275 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32dec6da-c9a2-490c-8b50-14e95259066b" containerName="ceilometer-notification-agent" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.769285 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="32dec6da-c9a2-490c-8b50-14e95259066b" containerName="ceilometer-notification-agent" Oct 14 13:35:13 crc kubenswrapper[4725]: E1014 13:35:13.769300 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32dec6da-c9a2-490c-8b50-14e95259066b" containerName="proxy-httpd" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.769308 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="32dec6da-c9a2-490c-8b50-14e95259066b" containerName="proxy-httpd" Oct 14 13:35:13 crc kubenswrapper[4725]: E1014 13:35:13.769319 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32dec6da-c9a2-490c-8b50-14e95259066b" containerName="ceilometer-central-agent" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.769326 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="32dec6da-c9a2-490c-8b50-14e95259066b" containerName="ceilometer-central-agent" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.769570 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="32dec6da-c9a2-490c-8b50-14e95259066b" containerName="ceilometer-central-agent" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.769596 4725 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="32dec6da-c9a2-490c-8b50-14e95259066b" containerName="proxy-httpd" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.769614 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="32dec6da-c9a2-490c-8b50-14e95259066b" containerName="sg-core" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.769637 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="32dec6da-c9a2-490c-8b50-14e95259066b" containerName="ceilometer-notification-agent" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.771825 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.775722 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.775925 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.776542 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.777004 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.780767 4725 scope.go:117] "RemoveContainer" containerID="919957d22e96b2058366f667070783ef3829c109c09e419921da3d105ead154f" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.815709 4725 scope.go:117] "RemoveContainer" containerID="c86368d69a2defa52913b87b2284e72895e58adcf4c9a3c7481745237671412c" Oct 14 13:35:13 crc kubenswrapper[4725]: E1014 13:35:13.816054 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c86368d69a2defa52913b87b2284e72895e58adcf4c9a3c7481745237671412c\": container with ID starting with c86368d69a2defa52913b87b2284e72895e58adcf4c9a3c7481745237671412c not found: ID does not exist" containerID="c86368d69a2defa52913b87b2284e72895e58adcf4c9a3c7481745237671412c" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.816090 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c86368d69a2defa52913b87b2284e72895e58adcf4c9a3c7481745237671412c"} err="failed to get container status \"c86368d69a2defa52913b87b2284e72895e58adcf4c9a3c7481745237671412c\": rpc error: code = NotFound desc = could not find container \"c86368d69a2defa52913b87b2284e72895e58adcf4c9a3c7481745237671412c\": container with ID starting with c86368d69a2defa52913b87b2284e72895e58adcf4c9a3c7481745237671412c not found: ID does not exist" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.816109 4725 scope.go:117] "RemoveContainer" containerID="1ea02786c15b34ed405c1b92534d523e279490cd00c7199e4b8c7a9eaa2498d1" Oct 14 13:35:13 crc kubenswrapper[4725]: E1014 13:35:13.816531 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ea02786c15b34ed405c1b92534d523e279490cd00c7199e4b8c7a9eaa2498d1\": container with ID starting with 1ea02786c15b34ed405c1b92534d523e279490cd00c7199e4b8c7a9eaa2498d1 not found: ID does not exist" containerID="1ea02786c15b34ed405c1b92534d523e279490cd00c7199e4b8c7a9eaa2498d1" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.816576 4725 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1ea02786c15b34ed405c1b92534d523e279490cd00c7199e4b8c7a9eaa2498d1"} err="failed to get container status \"1ea02786c15b34ed405c1b92534d523e279490cd00c7199e4b8c7a9eaa2498d1\": rpc error: code = NotFound desc = could not find container \"1ea02786c15b34ed405c1b92534d523e279490cd00c7199e4b8c7a9eaa2498d1\": container with ID starting with 1ea02786c15b34ed405c1b92534d523e279490cd00c7199e4b8c7a9eaa2498d1 not found: ID does not exist" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.816601 4725 scope.go:117] "RemoveContainer" containerID="05ff8112b4f19531b2b9e2569e78cef60207d08e5ea7372bec84911b6d14bbab" Oct 14 13:35:13 crc kubenswrapper[4725]: E1014 13:35:13.816895 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05ff8112b4f19531b2b9e2569e78cef60207d08e5ea7372bec84911b6d14bbab\": container with ID starting with 05ff8112b4f19531b2b9e2569e78cef60207d08e5ea7372bec84911b6d14bbab not found: ID does not exist" containerID="05ff8112b4f19531b2b9e2569e78cef60207d08e5ea7372bec84911b6d14bbab" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.816922 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05ff8112b4f19531b2b9e2569e78cef60207d08e5ea7372bec84911b6d14bbab"} err="failed to get container status \"05ff8112b4f19531b2b9e2569e78cef60207d08e5ea7372bec84911b6d14bbab\": rpc error: code = NotFound desc = could not find container \"05ff8112b4f19531b2b9e2569e78cef60207d08e5ea7372bec84911b6d14bbab\": container with ID starting with 05ff8112b4f19531b2b9e2569e78cef60207d08e5ea7372bec84911b6d14bbab not found: ID does not exist" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.816936 4725 scope.go:117] "RemoveContainer" containerID="919957d22e96b2058366f667070783ef3829c109c09e419921da3d105ead154f" Oct 14 13:35:13 crc kubenswrapper[4725]: E1014 13:35:13.817223 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"919957d22e96b2058366f667070783ef3829c109c09e419921da3d105ead154f\": container with ID starting with 919957d22e96b2058366f667070783ef3829c109c09e419921da3d105ead154f not found: ID does not exist" containerID="919957d22e96b2058366f667070783ef3829c109c09e419921da3d105ead154f" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.817254 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"919957d22e96b2058366f667070783ef3829c109c09e419921da3d105ead154f"} err="failed to get container status \"919957d22e96b2058366f667070783ef3829c109c09e419921da3d105ead154f\": rpc error: code = NotFound desc = could not find container \"919957d22e96b2058366f667070783ef3829c109c09e419921da3d105ead154f\": container with ID starting with 919957d22e96b2058366f667070783ef3829c109c09e419921da3d105ead154f not found: ID does not exist" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.845611 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/17b19201-3dd2-4e20-bc62-3727faed2947-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"17b19201-3dd2-4e20-bc62-3727faed2947\") " pod="openstack/ceilometer-0" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.845654 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/17b19201-3dd2-4e20-bc62-3727faed2947-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"17b19201-3dd2-4e20-bc62-3727faed2947\") " pod="openstack/ceilometer-0" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.845674 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5mdz\" (UniqueName: \"kubernetes.io/projected/17b19201-3dd2-4e20-bc62-3727faed2947-kube-api-access-g5mdz\") pod \"ceilometer-0\" (UID: \"17b19201-3dd2-4e20-bc62-3727faed2947\") " pod="openstack/ceilometer-0" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.845736 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17b19201-3dd2-4e20-bc62-3727faed2947-scripts\") pod \"ceilometer-0\" (UID: \"17b19201-3dd2-4e20-bc62-3727faed2947\") " pod="openstack/ceilometer-0" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.845774 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17b19201-3dd2-4e20-bc62-3727faed2947-config-data\") pod \"ceilometer-0\" (UID: \"17b19201-3dd2-4e20-bc62-3727faed2947\") " pod="openstack/ceilometer-0" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.845831 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17b19201-3dd2-4e20-bc62-3727faed2947-log-httpd\") pod \"ceilometer-0\" (UID: \"17b19201-3dd2-4e20-bc62-3727faed2947\") " pod="openstack/ceilometer-0" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.845870 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17b19201-3dd2-4e20-bc62-3727faed2947-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"17b19201-3dd2-4e20-bc62-3727faed2947\") " pod="openstack/ceilometer-0" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.846077 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17b19201-3dd2-4e20-bc62-3727faed2947-run-httpd\") pod \"ceilometer-0\" (UID: \"17b19201-3dd2-4e20-bc62-3727faed2947\") " pod="openstack/ceilometer-0" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.932617 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32dec6da-c9a2-490c-8b50-14e95259066b" path="/var/lib/kubelet/pods/32dec6da-c9a2-490c-8b50-14e95259066b/volumes" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.948477 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17b19201-3dd2-4e20-bc62-3727faed2947-log-httpd\") pod \"ceilometer-0\" (UID: \"17b19201-3dd2-4e20-bc62-3727faed2947\") " pod="openstack/ceilometer-0" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.948544 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17b19201-3dd2-4e20-bc62-3727faed2947-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"17b19201-3dd2-4e20-bc62-3727faed2947\") " pod="openstack/ceilometer-0" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.948609 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/17b19201-3dd2-4e20-bc62-3727faed2947-run-httpd\") pod \"ceilometer-0\" (UID: \"17b19201-3dd2-4e20-bc62-3727faed2947\") " pod="openstack/ceilometer-0" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.948634 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/17b19201-3dd2-4e20-bc62-3727faed2947-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"17b19201-3dd2-4e20-bc62-3727faed2947\") " pod="openstack/ceilometer-0" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.948652 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/17b19201-3dd2-4e20-bc62-3727faed2947-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"17b19201-3dd2-4e20-bc62-3727faed2947\") " pod="openstack/ceilometer-0" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.948672 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5mdz\" (UniqueName: \"kubernetes.io/projected/17b19201-3dd2-4e20-bc62-3727faed2947-kube-api-access-g5mdz\") pod \"ceilometer-0\" (UID: \"17b19201-3dd2-4e20-bc62-3727faed2947\") " pod="openstack/ceilometer-0" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.948704 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17b19201-3dd2-4e20-bc62-3727faed2947-scripts\") pod \"ceilometer-0\" (UID: \"17b19201-3dd2-4e20-bc62-3727faed2947\") " pod="openstack/ceilometer-0" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.948738 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17b19201-3dd2-4e20-bc62-3727faed2947-config-data\") pod \"ceilometer-0\" (UID: \"17b19201-3dd2-4e20-bc62-3727faed2947\") " pod="openstack/ceilometer-0" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.948955 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17b19201-3dd2-4e20-bc62-3727faed2947-log-httpd\") pod \"ceilometer-0\" (UID: \"17b19201-3dd2-4e20-bc62-3727faed2947\") " pod="openstack/ceilometer-0" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.949537 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/17b19201-3dd2-4e20-bc62-3727faed2947-run-httpd\") pod \"ceilometer-0\" (UID: \"17b19201-3dd2-4e20-bc62-3727faed2947\") " pod="openstack/ceilometer-0" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.952630 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/17b19201-3dd2-4e20-bc62-3727faed2947-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"17b19201-3dd2-4e20-bc62-3727faed2947\") " pod="openstack/ceilometer-0" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.952777 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17b19201-3dd2-4e20-bc62-3727faed2947-config-data\") pod \"ceilometer-0\" (UID: \"17b19201-3dd2-4e20-bc62-3727faed2947\") " pod="openstack/ceilometer-0" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.952781 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/17b19201-3dd2-4e20-bc62-3727faed2947-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"17b19201-3dd2-4e20-bc62-3727faed2947\") " pod="openstack/ceilometer-0" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.953198 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17b19201-3dd2-4e20-bc62-3727faed2947-scripts\") pod \"ceilometer-0\" (UID: \"17b19201-3dd2-4e20-bc62-3727faed2947\") " pod="openstack/ceilometer-0" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.956248 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17b19201-3dd2-4e20-bc62-3727faed2947-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"17b19201-3dd2-4e20-bc62-3727faed2947\") " pod="openstack/ceilometer-0" Oct 14 13:35:13 crc kubenswrapper[4725]: I1014 13:35:13.969078 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5mdz\" (UniqueName: \"kubernetes.io/projected/17b19201-3dd2-4e20-bc62-3727faed2947-kube-api-access-g5mdz\") pod \"ceilometer-0\" (UID: \"17b19201-3dd2-4e20-bc62-3727faed2947\") " pod="openstack/ceilometer-0" Oct 14 13:35:14 crc kubenswrapper[4725]: I1014 13:35:14.103110 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 14 13:35:14 crc kubenswrapper[4725]: I1014 13:35:14.211209 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:35:14 crc kubenswrapper[4725]: I1014 13:35:14.236246 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:35:14 crc kubenswrapper[4725]: I1014 13:35:14.571835 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 14 13:35:14 crc kubenswrapper[4725]: I1014 13:35:14.627697 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59cf4bdb65-jllfk" Oct 14 13:35:14 crc kubenswrapper[4725]: I1014 13:35:14.722714 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"17b19201-3dd2-4e20-bc62-3727faed2947","Type":"ContainerStarted","Data":"298ade28300d944c7e4c49461d3a6566ed783c5cca6179f8d220442230a81234"} Oct 14 13:35:14 crc kubenswrapper[4725]: I1014 13:35:14.724632 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-6ssb2"] Oct 14 13:35:14 crc kubenswrapper[4725]: I1014 13:35:14.724861 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-845d6d6f59-6ssb2" podUID="7c0e9f2f-675f-4836-9e5a-27529c6fecb4" containerName="dnsmasq-dns" containerID="cri-o://71eaadb72d625e86d643f5d7d463a401a40bffb9fb1340a991113530f500e740" gracePeriod=10 Oct 14 13:35:14 crc kubenswrapper[4725]: I1014 13:35:14.782642 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Oct 14 13:35:14 crc kubenswrapper[4725]: I1014 13:35:14.931404 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-tllk2"] Oct 14 13:35:14 crc kubenswrapper[4725]: I1014 13:35:14.932522 4725 util.go:30] "No sandbox for pod can be found. 
Oct 14 13:35:14 crc kubenswrapper[4725]: I1014 13:35:14.935310 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Oct 14 13:35:14 crc kubenswrapper[4725]: I1014 13:35:14.935321 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Oct 14 13:35:14 crc kubenswrapper[4725]: I1014 13:35:14.956294 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-tllk2"] Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 13:35:15.071147 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d44c6f67-5633-4639-a1e6-98a70a8c7a97-config-data\") pod \"nova-cell1-cell-mapping-tllk2\" (UID: \"d44c6f67-5633-4639-a1e6-98a70a8c7a97\") " pod="openstack/nova-cell1-cell-mapping-tllk2" Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 13:35:15.071570 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d44c6f67-5633-4639-a1e6-98a70a8c7a97-scripts\") pod \"nova-cell1-cell-mapping-tllk2\" (UID: \"d44c6f67-5633-4639-a1e6-98a70a8c7a97\") " pod="openstack/nova-cell1-cell-mapping-tllk2" Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 13:35:15.071629 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d44c6f67-5633-4639-a1e6-98a70a8c7a97-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-tllk2\" (UID: \"d44c6f67-5633-4639-a1e6-98a70a8c7a97\") " pod="openstack/nova-cell1-cell-mapping-tllk2" Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 13:35:15.071673 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtlx4\" (UniqueName: \"kubernetes.io/projected/d44c6f67-5633-4639-a1e6-98a70a8c7a97-kube-api-access-wtlx4\") pod \"nova-cell1-cell-mapping-tllk2\" (UID: \"d44c6f67-5633-4639-a1e6-98a70a8c7a97\") " pod="openstack/nova-cell1-cell-mapping-tllk2" Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 13:35:15.174020 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d44c6f67-5633-4639-a1e6-98a70a8c7a97-scripts\") pod \"nova-cell1-cell-mapping-tllk2\" (UID: \"d44c6f67-5633-4639-a1e6-98a70a8c7a97\") " pod="openstack/nova-cell1-cell-mapping-tllk2" Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 13:35:15.174122 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d44c6f67-5633-4639-a1e6-98a70a8c7a97-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-tllk2\" (UID: \"d44c6f67-5633-4639-a1e6-98a70a8c7a97\") " pod="openstack/nova-cell1-cell-mapping-tllk2" Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 13:35:15.174169 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtlx4\" (UniqueName: \"kubernetes.io/projected/d44c6f67-5633-4639-a1e6-98a70a8c7a97-kube-api-access-wtlx4\") pod \"nova-cell1-cell-mapping-tllk2\" (UID: \"d44c6f67-5633-4639-a1e6-98a70a8c7a97\") " pod="openstack/nova-cell1-cell-mapping-tllk2" Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 13:35:15.174193 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/d44c6f67-5633-4639-a1e6-98a70a8c7a97-config-data\") pod \"nova-cell1-cell-mapping-tllk2\" (UID: \"d44c6f67-5633-4639-a1e6-98a70a8c7a97\") " pod="openstack/nova-cell1-cell-mapping-tllk2" Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 13:35:15.181167 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d44c6f67-5633-4639-a1e6-98a70a8c7a97-scripts\") pod \"nova-cell1-cell-mapping-tllk2\" (UID: \"d44c6f67-5633-4639-a1e6-98a70a8c7a97\") " pod="openstack/nova-cell1-cell-mapping-tllk2" Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 13:35:15.181672 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d44c6f67-5633-4639-a1e6-98a70a8c7a97-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-tllk2\" (UID: \"d44c6f67-5633-4639-a1e6-98a70a8c7a97\") " pod="openstack/nova-cell1-cell-mapping-tllk2" Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 13:35:15.187627 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d44c6f67-5633-4639-a1e6-98a70a8c7a97-config-data\") pod \"nova-cell1-cell-mapping-tllk2\" (UID: \"d44c6f67-5633-4639-a1e6-98a70a8c7a97\") " pod="openstack/nova-cell1-cell-mapping-tllk2" Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 13:35:15.195762 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtlx4\" (UniqueName: \"kubernetes.io/projected/d44c6f67-5633-4639-a1e6-98a70a8c7a97-kube-api-access-wtlx4\") pod \"nova-cell1-cell-mapping-tllk2\" (UID: \"d44c6f67-5633-4639-a1e6-98a70a8c7a97\") " pod="openstack/nova-cell1-cell-mapping-tllk2" Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 13:35:15.253634 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-tllk2" Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 13:35:15.453314 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-6ssb2" Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 13:35:15.581968 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c0e9f2f-675f-4836-9e5a-27529c6fecb4-dns-swift-storage-0\") pod \"7c0e9f2f-675f-4836-9e5a-27529c6fecb4\" (UID: \"7c0e9f2f-675f-4836-9e5a-27529c6fecb4\") " Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 13:35:15.582135 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c0e9f2f-675f-4836-9e5a-27529c6fecb4-config\") pod \"7c0e9f2f-675f-4836-9e5a-27529c6fecb4\" (UID: \"7c0e9f2f-675f-4836-9e5a-27529c6fecb4\") " Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 13:35:15.582192 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c0e9f2f-675f-4836-9e5a-27529c6fecb4-dns-svc\") pod \"7c0e9f2f-675f-4836-9e5a-27529c6fecb4\" (UID: \"7c0e9f2f-675f-4836-9e5a-27529c6fecb4\") " Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 13:35:15.582219 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvtfv\" (UniqueName: \"kubernetes.io/projected/7c0e9f2f-675f-4836-9e5a-27529c6fecb4-kube-api-access-mvtfv\") pod \"7c0e9f2f-675f-4836-9e5a-27529c6fecb4\" (UID: \"7c0e9f2f-675f-4836-9e5a-27529c6fecb4\") " Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 13:35:15.582302 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c0e9f2f-675f-4836-9e5a-27529c6fecb4-ovsdbserver-nb\") pod \"7c0e9f2f-675f-4836-9e5a-27529c6fecb4\" (UID: \"7c0e9f2f-675f-4836-9e5a-27529c6fecb4\") " Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 13:35:15.582366 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c0e9f2f-675f-4836-9e5a-27529c6fecb4-ovsdbserver-sb\") pod \"7c0e9f2f-675f-4836-9e5a-27529c6fecb4\" (UID: \"7c0e9f2f-675f-4836-9e5a-27529c6fecb4\") " Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 13:35:15.594339 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c0e9f2f-675f-4836-9e5a-27529c6fecb4-kube-api-access-mvtfv" (OuterVolumeSpecName: "kube-api-access-mvtfv") pod "7c0e9f2f-675f-4836-9e5a-27529c6fecb4" (UID: "7c0e9f2f-675f-4836-9e5a-27529c6fecb4"). InnerVolumeSpecName "kube-api-access-mvtfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 13:35:15.643778 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c0e9f2f-675f-4836-9e5a-27529c6fecb4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7c0e9f2f-675f-4836-9e5a-27529c6fecb4" (UID: "7c0e9f2f-675f-4836-9e5a-27529c6fecb4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 13:35:15.648082 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c0e9f2f-675f-4836-9e5a-27529c6fecb4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7c0e9f2f-675f-4836-9e5a-27529c6fecb4" (UID: "7c0e9f2f-675f-4836-9e5a-27529c6fecb4"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 13:35:15.661985 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c0e9f2f-675f-4836-9e5a-27529c6fecb4-config" (OuterVolumeSpecName: "config") pod "7c0e9f2f-675f-4836-9e5a-27529c6fecb4" (UID: "7c0e9f2f-675f-4836-9e5a-27529c6fecb4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 13:35:15.665215 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c0e9f2f-675f-4836-9e5a-27529c6fecb4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7c0e9f2f-675f-4836-9e5a-27529c6fecb4" (UID: "7c0e9f2f-675f-4836-9e5a-27529c6fecb4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 13:35:15.667999 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c0e9f2f-675f-4836-9e5a-27529c6fecb4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7c0e9f2f-675f-4836-9e5a-27529c6fecb4" (UID: "7c0e9f2f-675f-4836-9e5a-27529c6fecb4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 13:35:15.684900 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c0e9f2f-675f-4836-9e5a-27529c6fecb4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 13:35:15.684931 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c0e9f2f-675f-4836-9e5a-27529c6fecb4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 13:35:15.684941 4725 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c0e9f2f-675f-4836-9e5a-27529c6fecb4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 13:35:15.684952 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c0e9f2f-675f-4836-9e5a-27529c6fecb4-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 13:35:15.684960 4725 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c0e9f2f-675f-4836-9e5a-27529c6fecb4-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 13:35:15.684970 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvtfv\" (UniqueName: \"kubernetes.io/projected/7c0e9f2f-675f-4836-9e5a-27529c6fecb4-kube-api-access-mvtfv\") on node \"crc\" DevicePath \"\"" Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 13:35:15.756942 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"17b19201-3dd2-4e20-bc62-3727faed2947","Type":"ContainerStarted","Data":"15c1d2b28ac006cded0648d65311d87600fcbed821517c434ed5573b5ace26a8"} Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 13:35:15.759253 4725 generic.go:334] "Generic (PLEG): container finished" podID="7c0e9f2f-675f-4836-9e5a-27529c6fecb4" containerID="71eaadb72d625e86d643f5d7d463a401a40bffb9fb1340a991113530f500e740" exitCode=0 Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 
13:35:15.759316 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-6ssb2" Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 13:35:15.759327 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-6ssb2" event={"ID":"7c0e9f2f-675f-4836-9e5a-27529c6fecb4","Type":"ContainerDied","Data":"71eaadb72d625e86d643f5d7d463a401a40bffb9fb1340a991113530f500e740"} Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 13:35:15.759380 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-6ssb2" event={"ID":"7c0e9f2f-675f-4836-9e5a-27529c6fecb4","Type":"ContainerDied","Data":"f6faa51a7b77ac5de8e7ed4767fb2552111a4d72607d0fea8c42b5ee23c14a67"} Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 13:35:15.759401 4725 scope.go:117] "RemoveContainer" containerID="71eaadb72d625e86d643f5d7d463a401a40bffb9fb1340a991113530f500e740" Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 13:35:15.767849 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-tllk2"] Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 13:35:15.790214 4725 scope.go:117] "RemoveContainer" containerID="f0e4e5eeca0194c44326b5107f54bef2af41f026daed4d5cab747f61521eaef3" Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 13:35:15.798684 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-6ssb2"] Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 13:35:15.805878 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-6ssb2"] Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 13:35:15.831708 4725 scope.go:117] "RemoveContainer" containerID="71eaadb72d625e86d643f5d7d463a401a40bffb9fb1340a991113530f500e740" Oct 14 13:35:15 crc kubenswrapper[4725]: E1014 13:35:15.832091 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71eaadb72d625e86d643f5d7d463a401a40bffb9fb1340a991113530f500e740\": container with ID starting with 71eaadb72d625e86d643f5d7d463a401a40bffb9fb1340a991113530f500e740 not found: ID does not exist" containerID="71eaadb72d625e86d643f5d7d463a401a40bffb9fb1340a991113530f500e740" Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 13:35:15.832126 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71eaadb72d625e86d643f5d7d463a401a40bffb9fb1340a991113530f500e740"} err="failed to get container status \"71eaadb72d625e86d643f5d7d463a401a40bffb9fb1340a991113530f500e740\": rpc error: code = NotFound desc = could not find container \"71eaadb72d625e86d643f5d7d463a401a40bffb9fb1340a991113530f500e740\": container with ID starting with 71eaadb72d625e86d643f5d7d463a401a40bffb9fb1340a991113530f500e740 not found: ID does not exist" Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 13:35:15.832146 4725 scope.go:117] "RemoveContainer" containerID="f0e4e5eeca0194c44326b5107f54bef2af41f026daed4d5cab747f61521eaef3" Oct 14 13:35:15 crc kubenswrapper[4725]: E1014 13:35:15.832596 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0e4e5eeca0194c44326b5107f54bef2af41f026daed4d5cab747f61521eaef3\": container with ID starting with f0e4e5eeca0194c44326b5107f54bef2af41f026daed4d5cab747f61521eaef3 not found: ID does not exist" containerID="f0e4e5eeca0194c44326b5107f54bef2af41f026daed4d5cab747f61521eaef3" Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 
13:35:15.832620 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0e4e5eeca0194c44326b5107f54bef2af41f026daed4d5cab747f61521eaef3"} err="failed to get container status \"f0e4e5eeca0194c44326b5107f54bef2af41f026daed4d5cab747f61521eaef3\": rpc error: code = NotFound desc = could not find container \"f0e4e5eeca0194c44326b5107f54bef2af41f026daed4d5cab747f61521eaef3\": container with ID starting with f0e4e5eeca0194c44326b5107f54bef2af41f026daed4d5cab747f61521eaef3 not found: ID does not exist" Oct 14 13:35:15 crc kubenswrapper[4725]: I1014 13:35:15.934509 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c0e9f2f-675f-4836-9e5a-27529c6fecb4" path="/var/lib/kubelet/pods/7c0e9f2f-675f-4836-9e5a-27529c6fecb4/volumes" Oct 14 13:35:16 crc kubenswrapper[4725]: I1014 13:35:16.769926 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-tllk2" event={"ID":"d44c6f67-5633-4639-a1e6-98a70a8c7a97","Type":"ContainerStarted","Data":"ed51e96eabeb2b7aa18f0a343c1e41bb3720e77c94726c9a1917cf7efb679461"} Oct 14 13:35:16 crc kubenswrapper[4725]: I1014 13:35:16.770291 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-tllk2" event={"ID":"d44c6f67-5633-4639-a1e6-98a70a8c7a97","Type":"ContainerStarted","Data":"827a4ed72791100af04624c22152850767fcc8dbf9a41d3591ac5124713c38ca"} Oct 14 13:35:16 crc kubenswrapper[4725]: I1014 13:35:16.772040 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"17b19201-3dd2-4e20-bc62-3727faed2947","Type":"ContainerStarted","Data":"d18f04cc30820b5beb953f3950deed403bb65af280a6a400738730fdf9168b18"} Oct 14 13:35:16 crc kubenswrapper[4725]: I1014 13:35:16.790626 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-tllk2" podStartSLOduration=2.790607692 podStartE2EDuration="2.790607692s" podCreationTimestamp="2025-10-14 13:35:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:35:16.782485157 +0000 UTC m=+1233.630919966" watchObservedRunningTime="2025-10-14 13:35:16.790607692 +0000 UTC m=+1233.639042501" Oct 14 13:35:17 crc kubenswrapper[4725]: I1014 13:35:17.782104 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"17b19201-3dd2-4e20-bc62-3727faed2947","Type":"ContainerStarted","Data":"4315320d4396ce55d7ae40c9d25c08865d66d77a6fa2ffb27e190f96c28be78d"} Oct 14 13:35:18 crc kubenswrapper[4725]: I1014 13:35:18.795706 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"17b19201-3dd2-4e20-bc62-3727faed2947","Type":"ContainerStarted","Data":"314f0fe2d5727987aee0e3de4b3acc0585692d7e129c2326d72181a83b674d48"} Oct 14 13:35:18 crc kubenswrapper[4725]: I1014 13:35:18.796427 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 14 13:35:18 crc kubenswrapper[4725]: I1014 13:35:18.816059 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.132028492 podStartE2EDuration="5.81603205s" podCreationTimestamp="2025-10-14 13:35:13 +0000 UTC" firstStartedPulling="2025-10-14 13:35:14.572194526 +0000 UTC m=+1231.420629335" lastFinishedPulling="2025-10-14 13:35:18.256198084 +0000 UTC m=+1235.104632893" observedRunningTime="2025-10-14 13:35:18.812597996 
+0000 UTC m=+1235.661032845" watchObservedRunningTime="2025-10-14 13:35:18.81603205 +0000 UTC m=+1235.664466879" Oct 14 13:35:20 crc kubenswrapper[4725]: I1014 13:35:20.814192 4725 generic.go:334] "Generic (PLEG): container finished" podID="d44c6f67-5633-4639-a1e6-98a70a8c7a97" containerID="ed51e96eabeb2b7aa18f0a343c1e41bb3720e77c94726c9a1917cf7efb679461" exitCode=0 Oct 14 13:35:20 crc kubenswrapper[4725]: I1014 13:35:20.814282 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-tllk2" event={"ID":"d44c6f67-5633-4639-a1e6-98a70a8c7a97","Type":"ContainerDied","Data":"ed51e96eabeb2b7aa18f0a343c1e41bb3720e77c94726c9a1917cf7efb679461"} Oct 14 13:35:21 crc kubenswrapper[4725]: I1014 13:35:21.030125 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 14 13:35:21 crc kubenswrapper[4725]: I1014 13:35:21.030173 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 14 13:35:22 crc kubenswrapper[4725]: I1014 13:35:22.046719 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6ae2f566-a1e5-4bbd-8502-29665c052b35" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.200:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 13:35:22 crc kubenswrapper[4725]: I1014 13:35:22.046709 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6ae2f566-a1e5-4bbd-8502-29665c052b35" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.200:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 13:35:22 crc kubenswrapper[4725]: I1014 13:35:22.203205 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-tllk2" Oct 14 13:35:22 crc kubenswrapper[4725]: I1014 13:35:22.313817 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d44c6f67-5633-4639-a1e6-98a70a8c7a97-scripts\") pod \"d44c6f67-5633-4639-a1e6-98a70a8c7a97\" (UID: \"d44c6f67-5633-4639-a1e6-98a70a8c7a97\") " Oct 14 13:35:22 crc kubenswrapper[4725]: I1014 13:35:22.313920 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d44c6f67-5633-4639-a1e6-98a70a8c7a97-combined-ca-bundle\") pod \"d44c6f67-5633-4639-a1e6-98a70a8c7a97\" (UID: \"d44c6f67-5633-4639-a1e6-98a70a8c7a97\") " Oct 14 13:35:22 crc kubenswrapper[4725]: I1014 13:35:22.314050 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtlx4\" (UniqueName: \"kubernetes.io/projected/d44c6f67-5633-4639-a1e6-98a70a8c7a97-kube-api-access-wtlx4\") pod \"d44c6f67-5633-4639-a1e6-98a70a8c7a97\" (UID: \"d44c6f67-5633-4639-a1e6-98a70a8c7a97\") " Oct 14 13:35:22 crc kubenswrapper[4725]: I1014 13:35:22.314214 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d44c6f67-5633-4639-a1e6-98a70a8c7a97-config-data\") pod \"d44c6f67-5633-4639-a1e6-98a70a8c7a97\" (UID: \"d44c6f67-5633-4639-a1e6-98a70a8c7a97\") " Oct 14 13:35:22 crc kubenswrapper[4725]: I1014 13:35:22.320651 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d44c6f67-5633-4639-a1e6-98a70a8c7a97-kube-api-access-wtlx4" (OuterVolumeSpecName: "kube-api-access-wtlx4") pod "d44c6f67-5633-4639-a1e6-98a70a8c7a97" (UID: "d44c6f67-5633-4639-a1e6-98a70a8c7a97"). InnerVolumeSpecName "kube-api-access-wtlx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:35:22 crc kubenswrapper[4725]: I1014 13:35:22.324396 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d44c6f67-5633-4639-a1e6-98a70a8c7a97-scripts" (OuterVolumeSpecName: "scripts") pod "d44c6f67-5633-4639-a1e6-98a70a8c7a97" (UID: "d44c6f67-5633-4639-a1e6-98a70a8c7a97"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:35:22 crc kubenswrapper[4725]: I1014 13:35:22.343715 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d44c6f67-5633-4639-a1e6-98a70a8c7a97-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d44c6f67-5633-4639-a1e6-98a70a8c7a97" (UID: "d44c6f67-5633-4639-a1e6-98a70a8c7a97"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:35:22 crc kubenswrapper[4725]: I1014 13:35:22.351038 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d44c6f67-5633-4639-a1e6-98a70a8c7a97-config-data" (OuterVolumeSpecName: "config-data") pod "d44c6f67-5633-4639-a1e6-98a70a8c7a97" (UID: "d44c6f67-5633-4639-a1e6-98a70a8c7a97"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:35:22 crc kubenswrapper[4725]: I1014 13:35:22.416864 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d44c6f67-5633-4639-a1e6-98a70a8c7a97-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:35:22 crc kubenswrapper[4725]: I1014 13:35:22.417028 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtlx4\" (UniqueName: \"kubernetes.io/projected/d44c6f67-5633-4639-a1e6-98a70a8c7a97-kube-api-access-wtlx4\") on node \"crc\" DevicePath \"\"" Oct 14 13:35:22 crc kubenswrapper[4725]: I1014 13:35:22.417053 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d44c6f67-5633-4639-a1e6-98a70a8c7a97-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 13:35:22 crc kubenswrapper[4725]: I1014 13:35:22.417073 4725 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d44c6f67-5633-4639-a1e6-98a70a8c7a97-scripts\") on node \"crc\" DevicePath \"\"" Oct 14 13:35:22 crc kubenswrapper[4725]: I1014 13:35:22.837471 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-tllk2" event={"ID":"d44c6f67-5633-4639-a1e6-98a70a8c7a97","Type":"ContainerDied","Data":"827a4ed72791100af04624c22152850767fcc8dbf9a41d3591ac5124713c38ca"} Oct 14 13:35:22 crc kubenswrapper[4725]: I1014 13:35:22.837795 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="827a4ed72791100af04624c22152850767fcc8dbf9a41d3591ac5124713c38ca" Oct 14 13:35:22 crc kubenswrapper[4725]: I1014 13:35:22.837590 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-tllk2" Oct 14 13:35:23 crc kubenswrapper[4725]: I1014 13:35:23.052346 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 14 13:35:23 crc kubenswrapper[4725]: I1014 13:35:23.052706 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6ae2f566-a1e5-4bbd-8502-29665c052b35" containerName="nova-api-log" containerID="cri-o://7430cc87cacdd521cf1a20f3d0632f98345ee6b87735b9f78146329317626a08" gracePeriod=30 Oct 14 13:35:23 crc kubenswrapper[4725]: I1014 13:35:23.052819 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6ae2f566-a1e5-4bbd-8502-29665c052b35" containerName="nova-api-api" containerID="cri-o://deb65aa37755f824f9ac2869f7ec115ec8e38919951ecc4cdaedb8fc8d6c6b5b" gracePeriod=30 Oct 14 13:35:23 crc kubenswrapper[4725]: I1014 13:35:23.072685 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 13:35:23 crc kubenswrapper[4725]: I1014 13:35:23.072962 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="7243f030-3d65-4fc3-bf95-7a3df406ad4d" containerName="nova-scheduler-scheduler" containerID="cri-o://47544b09f5a1bc3f6f101fc538ff8f6631dd547fea09ae1357b6e94acbea89c8" gracePeriod=30 Oct 14 13:35:23 crc kubenswrapper[4725]: I1014 13:35:23.087072 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 13:35:23 crc kubenswrapper[4725]: I1014 13:35:23.087337 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a2f2aea4-7075-45b5-94c7-73d1b181beb8" 
containerName="nova-metadata-log" containerID="cri-o://4a56d9d6286fccae08cae0cd622f550cd44a5275731826ef484cbc061f89a5ca" gracePeriod=30 Oct 14 13:35:23 crc kubenswrapper[4725]: I1014 13:35:23.087742 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a2f2aea4-7075-45b5-94c7-73d1b181beb8" containerName="nova-metadata-metadata" containerID="cri-o://4c954ba509c4060eb493db7ab8acda5bcdc4598fd7350f9069bde8693aeb4956" gracePeriod=30 Oct 14 13:35:23 crc kubenswrapper[4725]: I1014 13:35:23.847303 4725 generic.go:334] "Generic (PLEG): container finished" podID="6ae2f566-a1e5-4bbd-8502-29665c052b35" containerID="7430cc87cacdd521cf1a20f3d0632f98345ee6b87735b9f78146329317626a08" exitCode=143 Oct 14 13:35:23 crc kubenswrapper[4725]: I1014 13:35:23.847355 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6ae2f566-a1e5-4bbd-8502-29665c052b35","Type":"ContainerDied","Data":"7430cc87cacdd521cf1a20f3d0632f98345ee6b87735b9f78146329317626a08"} Oct 14 13:35:23 crc kubenswrapper[4725]: I1014 13:35:23.849788 4725 generic.go:334] "Generic (PLEG): container finished" podID="a2f2aea4-7075-45b5-94c7-73d1b181beb8" containerID="4a56d9d6286fccae08cae0cd622f550cd44a5275731826ef484cbc061f89a5ca" exitCode=143 Oct 14 13:35:23 crc kubenswrapper[4725]: I1014 13:35:23.849817 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a2f2aea4-7075-45b5-94c7-73d1b181beb8","Type":"ContainerDied","Data":"4a56d9d6286fccae08cae0cd622f550cd44a5275731826ef484cbc061f89a5ca"} Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.265768 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="a2f2aea4-7075-45b5-94c7-73d1b181beb8" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": read tcp 10.217.0.2:39662->10.217.0.194:8775: read: connection reset by peer" Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.269893 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="a2f2aea4-7075-45b5-94c7-73d1b181beb8" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": read tcp 10.217.0.2:39666->10.217.0.194:8775: read: connection reset by peer" Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.709279 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.805506 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2f2aea4-7075-45b5-94c7-73d1b181beb8-combined-ca-bundle\") pod \"a2f2aea4-7075-45b5-94c7-73d1b181beb8\" (UID: \"a2f2aea4-7075-45b5-94c7-73d1b181beb8\") " Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.805703 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2f2aea4-7075-45b5-94c7-73d1b181beb8-logs\") pod \"a2f2aea4-7075-45b5-94c7-73d1b181beb8\" (UID: \"a2f2aea4-7075-45b5-94c7-73d1b181beb8\") " Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.805792 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfxcr\" (UniqueName: \"kubernetes.io/projected/a2f2aea4-7075-45b5-94c7-73d1b181beb8-kube-api-access-kfxcr\") pod \"a2f2aea4-7075-45b5-94c7-73d1b181beb8\" (UID: \"a2f2aea4-7075-45b5-94c7-73d1b181beb8\") " Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.805855 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2f2aea4-7075-45b5-94c7-73d1b181beb8-nova-metadata-tls-certs\") pod \"a2f2aea4-7075-45b5-94c7-73d1b181beb8\" (UID: \"a2f2aea4-7075-45b5-94c7-73d1b181beb8\") " Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.805903 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2f2aea4-7075-45b5-94c7-73d1b181beb8-config-data\") pod \"a2f2aea4-7075-45b5-94c7-73d1b181beb8\" (UID: \"a2f2aea4-7075-45b5-94c7-73d1b181beb8\") " Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.806231 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2f2aea4-7075-45b5-94c7-73d1b181beb8-logs" (OuterVolumeSpecName: "logs") pod "a2f2aea4-7075-45b5-94c7-73d1b181beb8" (UID: "a2f2aea4-7075-45b5-94c7-73d1b181beb8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.810701 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2f2aea4-7075-45b5-94c7-73d1b181beb8-kube-api-access-kfxcr" (OuterVolumeSpecName: "kube-api-access-kfxcr") pod "a2f2aea4-7075-45b5-94c7-73d1b181beb8" (UID: "a2f2aea4-7075-45b5-94c7-73d1b181beb8"). InnerVolumeSpecName "kube-api-access-kfxcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.832948 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2f2aea4-7075-45b5-94c7-73d1b181beb8-config-data" (OuterVolumeSpecName: "config-data") pod "a2f2aea4-7075-45b5-94c7-73d1b181beb8" (UID: "a2f2aea4-7075-45b5-94c7-73d1b181beb8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.842478 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2f2aea4-7075-45b5-94c7-73d1b181beb8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2f2aea4-7075-45b5-94c7-73d1b181beb8" (UID: "a2f2aea4-7075-45b5-94c7-73d1b181beb8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.857703 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.866685 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2f2aea4-7075-45b5-94c7-73d1b181beb8-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "a2f2aea4-7075-45b5-94c7-73d1b181beb8" (UID: "a2f2aea4-7075-45b5-94c7-73d1b181beb8"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.882775 4725 generic.go:334] "Generic (PLEG): container finished" podID="7243f030-3d65-4fc3-bf95-7a3df406ad4d" containerID="47544b09f5a1bc3f6f101fc538ff8f6631dd547fea09ae1357b6e94acbea89c8" exitCode=0 Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.882841 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7243f030-3d65-4fc3-bf95-7a3df406ad4d","Type":"ContainerDied","Data":"47544b09f5a1bc3f6f101fc538ff8f6631dd547fea09ae1357b6e94acbea89c8"} Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.882866 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7243f030-3d65-4fc3-bf95-7a3df406ad4d","Type":"ContainerDied","Data":"80c077fbeebc7301629c7173d18ca7e8ba1edba027c69da0e3a3792dcb4e1492"} Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.882882 4725 scope.go:117] "RemoveContainer" containerID="47544b09f5a1bc3f6f101fc538ff8f6631dd547fea09ae1357b6e94acbea89c8" Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.883003 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.889469 4725 generic.go:334] "Generic (PLEG): container finished" podID="a2f2aea4-7075-45b5-94c7-73d1b181beb8" containerID="4c954ba509c4060eb493db7ab8acda5bcdc4598fd7350f9069bde8693aeb4956" exitCode=0 Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.889518 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a2f2aea4-7075-45b5-94c7-73d1b181beb8","Type":"ContainerDied","Data":"4c954ba509c4060eb493db7ab8acda5bcdc4598fd7350f9069bde8693aeb4956"} Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.889548 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a2f2aea4-7075-45b5-94c7-73d1b181beb8","Type":"ContainerDied","Data":"63fbff28b6fc7653665ca7546fd9a2e6d492fb461224604cd8a2746faf98fbab"} Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.889640 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.907857 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7243f030-3d65-4fc3-bf95-7a3df406ad4d-combined-ca-bundle\") pod \"7243f030-3d65-4fc3-bf95-7a3df406ad4d\" (UID: \"7243f030-3d65-4fc3-bf95-7a3df406ad4d\") " Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.907952 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9k7m\" (UniqueName: \"kubernetes.io/projected/7243f030-3d65-4fc3-bf95-7a3df406ad4d-kube-api-access-r9k7m\") pod \"7243f030-3d65-4fc3-bf95-7a3df406ad4d\" (UID: \"7243f030-3d65-4fc3-bf95-7a3df406ad4d\") " Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.909072 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7243f030-3d65-4fc3-bf95-7a3df406ad4d-config-data\") pod \"7243f030-3d65-4fc3-bf95-7a3df406ad4d\" (UID: \"7243f030-3d65-4fc3-bf95-7a3df406ad4d\") " Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.912315 4725 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2f2aea4-7075-45b5-94c7-73d1b181beb8-logs\") on node \"crc\" DevicePath \"\"" Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.912496 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfxcr\" (UniqueName: \"kubernetes.io/projected/a2f2aea4-7075-45b5-94c7-73d1b181beb8-kube-api-access-kfxcr\") on node \"crc\" DevicePath \"\"" Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.912553 4725 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2f2aea4-7075-45b5-94c7-73d1b181beb8-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.912566 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2f2aea4-7075-45b5-94c7-73d1b181beb8-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.912576 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2f2aea4-7075-45b5-94c7-73d1b181beb8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.912941 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7243f030-3d65-4fc3-bf95-7a3df406ad4d-kube-api-access-r9k7m" (OuterVolumeSpecName: "kube-api-access-r9k7m") pod "7243f030-3d65-4fc3-bf95-7a3df406ad4d" (UID: "7243f030-3d65-4fc3-bf95-7a3df406ad4d"). InnerVolumeSpecName "kube-api-access-r9k7m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.926634 4725 scope.go:117] "RemoveContainer" containerID="47544b09f5a1bc3f6f101fc538ff8f6631dd547fea09ae1357b6e94acbea89c8" Oct 14 13:35:26 crc kubenswrapper[4725]: E1014 13:35:26.927373 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47544b09f5a1bc3f6f101fc538ff8f6631dd547fea09ae1357b6e94acbea89c8\": container with ID starting with 47544b09f5a1bc3f6f101fc538ff8f6631dd547fea09ae1357b6e94acbea89c8 not found: ID does not exist" containerID="47544b09f5a1bc3f6f101fc538ff8f6631dd547fea09ae1357b6e94acbea89c8" Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.927471 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47544b09f5a1bc3f6f101fc538ff8f6631dd547fea09ae1357b6e94acbea89c8"} err="failed to get container status \"47544b09f5a1bc3f6f101fc538ff8f6631dd547fea09ae1357b6e94acbea89c8\": rpc error: code = NotFound desc = could not find container \"47544b09f5a1bc3f6f101fc538ff8f6631dd547fea09ae1357b6e94acbea89c8\": container with ID starting with 47544b09f5a1bc3f6f101fc538ff8f6631dd547fea09ae1357b6e94acbea89c8 not found: ID does not exist" Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.927499 4725 scope.go:117] "RemoveContainer" containerID="4c954ba509c4060eb493db7ab8acda5bcdc4598fd7350f9069bde8693aeb4956" Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.930688 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.938583 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.941910 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7243f030-3d65-4fc3-bf95-7a3df406ad4d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7243f030-3d65-4fc3-bf95-7a3df406ad4d" (UID: "7243f030-3d65-4fc3-bf95-7a3df406ad4d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.953504 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 14 13:35:26 crc kubenswrapper[4725]: E1014 13:35:26.954015 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c0e9f2f-675f-4836-9e5a-27529c6fecb4" containerName="init" Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.954044 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c0e9f2f-675f-4836-9e5a-27529c6fecb4" containerName="init" Oct 14 13:35:26 crc kubenswrapper[4725]: E1014 13:35:26.954059 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c0e9f2f-675f-4836-9e5a-27529c6fecb4" containerName="dnsmasq-dns" Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.954068 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c0e9f2f-675f-4836-9e5a-27529c6fecb4" containerName="dnsmasq-dns" Oct 14 13:35:26 crc kubenswrapper[4725]: E1014 13:35:26.954096 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7243f030-3d65-4fc3-bf95-7a3df406ad4d" containerName="nova-scheduler-scheduler" Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.954105 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="7243f030-3d65-4fc3-bf95-7a3df406ad4d" containerName="nova-scheduler-scheduler" Oct 14 13:35:26 crc kubenswrapper[4725]: E1014 13:35:26.954118 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d44c6f67-5633-4639-a1e6-98a70a8c7a97" containerName="nova-manage" Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.954125 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="d44c6f67-5633-4639-a1e6-98a70a8c7a97" containerName="nova-manage" Oct 14 13:35:26 crc kubenswrapper[4725]: E1014 13:35:26.954349 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2f2aea4-7075-45b5-94c7-73d1b181beb8" containerName="nova-metadata-log" Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.954359 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2f2aea4-7075-45b5-94c7-73d1b181beb8" containerName="nova-metadata-log" Oct 14 13:35:26 crc kubenswrapper[4725]: E1014 13:35:26.954400 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2f2aea4-7075-45b5-94c7-73d1b181beb8" containerName="nova-metadata-metadata" Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.954410 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2f2aea4-7075-45b5-94c7-73d1b181beb8" containerName="nova-metadata-metadata" Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.954639 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="7243f030-3d65-4fc3-bf95-7a3df406ad4d" containerName="nova-scheduler-scheduler" Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.954670 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c0e9f2f-675f-4836-9e5a-27529c6fecb4" containerName="dnsmasq-dns" Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.954681 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2f2aea4-7075-45b5-94c7-73d1b181beb8" containerName="nova-metadata-metadata" Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.954696 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2f2aea4-7075-45b5-94c7-73d1b181beb8" containerName="nova-metadata-log" Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.954714 4725 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d44c6f67-5633-4639-a1e6-98a70a8c7a97" containerName="nova-manage" Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.955747 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.957793 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.960241 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.961376 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.962653 4725 scope.go:117] "RemoveContainer" containerID="4a56d9d6286fccae08cae0cd622f550cd44a5275731826ef484cbc061f89a5ca" Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.967625 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7243f030-3d65-4fc3-bf95-7a3df406ad4d-config-data" (OuterVolumeSpecName: "config-data") pod "7243f030-3d65-4fc3-bf95-7a3df406ad4d" (UID: "7243f030-3d65-4fc3-bf95-7a3df406ad4d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.988950 4725 scope.go:117] "RemoveContainer" containerID="4c954ba509c4060eb493db7ab8acda5bcdc4598fd7350f9069bde8693aeb4956" Oct 14 13:35:26 crc kubenswrapper[4725]: E1014 13:35:26.989506 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c954ba509c4060eb493db7ab8acda5bcdc4598fd7350f9069bde8693aeb4956\": container with ID starting with 4c954ba509c4060eb493db7ab8acda5bcdc4598fd7350f9069bde8693aeb4956 not found: ID does not exist" containerID="4c954ba509c4060eb493db7ab8acda5bcdc4598fd7350f9069bde8693aeb4956" Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.989541 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c954ba509c4060eb493db7ab8acda5bcdc4598fd7350f9069bde8693aeb4956"} err="failed to get container status \"4c954ba509c4060eb493db7ab8acda5bcdc4598fd7350f9069bde8693aeb4956\": rpc error: code = NotFound desc = could not find container \"4c954ba509c4060eb493db7ab8acda5bcdc4598fd7350f9069bde8693aeb4956\": container with ID starting with 4c954ba509c4060eb493db7ab8acda5bcdc4598fd7350f9069bde8693aeb4956 not found: ID does not exist" Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.989563 4725 scope.go:117] "RemoveContainer" containerID="4a56d9d6286fccae08cae0cd622f550cd44a5275731826ef484cbc061f89a5ca" Oct 14 13:35:26 crc kubenswrapper[4725]: E1014 13:35:26.990034 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a56d9d6286fccae08cae0cd622f550cd44a5275731826ef484cbc061f89a5ca\": container with ID starting with 4a56d9d6286fccae08cae0cd622f550cd44a5275731826ef484cbc061f89a5ca not found: ID does not exist" containerID="4a56d9d6286fccae08cae0cd622f550cd44a5275731826ef484cbc061f89a5ca" Oct 14 13:35:26 crc kubenswrapper[4725]: I1014 13:35:26.990057 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a56d9d6286fccae08cae0cd622f550cd44a5275731826ef484cbc061f89a5ca"} err="failed to get container status 
\"4a56d9d6286fccae08cae0cd622f550cd44a5275731826ef484cbc061f89a5ca\": rpc error: code = NotFound desc = could not find container \"4a56d9d6286fccae08cae0cd622f550cd44a5275731826ef484cbc061f89a5ca\": container with ID starting with 4a56d9d6286fccae08cae0cd622f550cd44a5275731826ef484cbc061f89a5ca not found: ID does not exist" Oct 14 13:35:27 crc kubenswrapper[4725]: I1014 13:35:27.014711 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2bmp\" (UniqueName: \"kubernetes.io/projected/42ae164c-62cd-48cd-a5cf-17ce40c2cc61-kube-api-access-w2bmp\") pod \"nova-metadata-0\" (UID: \"42ae164c-62cd-48cd-a5cf-17ce40c2cc61\") " pod="openstack/nova-metadata-0" Oct 14 13:35:27 crc kubenswrapper[4725]: I1014 13:35:27.014811 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42ae164c-62cd-48cd-a5cf-17ce40c2cc61-config-data\") pod \"nova-metadata-0\" (UID: \"42ae164c-62cd-48cd-a5cf-17ce40c2cc61\") " pod="openstack/nova-metadata-0" Oct 14 13:35:27 crc kubenswrapper[4725]: I1014 13:35:27.014977 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/42ae164c-62cd-48cd-a5cf-17ce40c2cc61-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"42ae164c-62cd-48cd-a5cf-17ce40c2cc61\") " pod="openstack/nova-metadata-0" Oct 14 13:35:27 crc kubenswrapper[4725]: I1014 13:35:27.015048 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42ae164c-62cd-48cd-a5cf-17ce40c2cc61-logs\") pod \"nova-metadata-0\" (UID: \"42ae164c-62cd-48cd-a5cf-17ce40c2cc61\") " pod="openstack/nova-metadata-0" Oct 14 13:35:27 crc kubenswrapper[4725]: I1014 13:35:27.015067 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42ae164c-62cd-48cd-a5cf-17ce40c2cc61-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"42ae164c-62cd-48cd-a5cf-17ce40c2cc61\") " pod="openstack/nova-metadata-0" Oct 14 13:35:27 crc kubenswrapper[4725]: I1014 13:35:27.015112 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7243f030-3d65-4fc3-bf95-7a3df406ad4d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:35:27 crc kubenswrapper[4725]: I1014 13:35:27.015124 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9k7m\" (UniqueName: \"kubernetes.io/projected/7243f030-3d65-4fc3-bf95-7a3df406ad4d-kube-api-access-r9k7m\") on node \"crc\" DevicePath \"\"" Oct 14 13:35:27 crc kubenswrapper[4725]: I1014 13:35:27.015141 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7243f030-3d65-4fc3-bf95-7a3df406ad4d-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 13:35:27 crc kubenswrapper[4725]: I1014 13:35:27.116595 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2bmp\" (UniqueName: \"kubernetes.io/projected/42ae164c-62cd-48cd-a5cf-17ce40c2cc61-kube-api-access-w2bmp\") pod \"nova-metadata-0\" (UID: \"42ae164c-62cd-48cd-a5cf-17ce40c2cc61\") " pod="openstack/nova-metadata-0" Oct 14 13:35:27 crc kubenswrapper[4725]: I1014 13:35:27.116978 4725 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42ae164c-62cd-48cd-a5cf-17ce40c2cc61-config-data\") pod \"nova-metadata-0\" (UID: \"42ae164c-62cd-48cd-a5cf-17ce40c2cc61\") " pod="openstack/nova-metadata-0" Oct 14 13:35:27 crc kubenswrapper[4725]: I1014 13:35:27.117157 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/42ae164c-62cd-48cd-a5cf-17ce40c2cc61-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"42ae164c-62cd-48cd-a5cf-17ce40c2cc61\") " pod="openstack/nova-metadata-0" Oct 14 13:35:27 crc kubenswrapper[4725]: I1014 13:35:27.117304 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42ae164c-62cd-48cd-a5cf-17ce40c2cc61-logs\") pod \"nova-metadata-0\" (UID: \"42ae164c-62cd-48cd-a5cf-17ce40c2cc61\") " pod="openstack/nova-metadata-0" Oct 14 13:35:27 crc kubenswrapper[4725]: I1014 13:35:27.117399 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42ae164c-62cd-48cd-a5cf-17ce40c2cc61-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"42ae164c-62cd-48cd-a5cf-17ce40c2cc61\") " pod="openstack/nova-metadata-0" Oct 14 13:35:27 crc kubenswrapper[4725]: I1014 13:35:27.117673 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42ae164c-62cd-48cd-a5cf-17ce40c2cc61-logs\") pod \"nova-metadata-0\" (UID: \"42ae164c-62cd-48cd-a5cf-17ce40c2cc61\") " pod="openstack/nova-metadata-0" Oct 14 13:35:27 crc kubenswrapper[4725]: I1014 13:35:27.120867 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/42ae164c-62cd-48cd-a5cf-17ce40c2cc61-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"42ae164c-62cd-48cd-a5cf-17ce40c2cc61\") " pod="openstack/nova-metadata-0" Oct 14 13:35:27 crc kubenswrapper[4725]: I1014 13:35:27.121481 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42ae164c-62cd-48cd-a5cf-17ce40c2cc61-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"42ae164c-62cd-48cd-a5cf-17ce40c2cc61\") " pod="openstack/nova-metadata-0" Oct 14 13:35:27 crc kubenswrapper[4725]: I1014 13:35:27.122757 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42ae164c-62cd-48cd-a5cf-17ce40c2cc61-config-data\") pod \"nova-metadata-0\" (UID: \"42ae164c-62cd-48cd-a5cf-17ce40c2cc61\") " pod="openstack/nova-metadata-0" Oct 14 13:35:27 crc kubenswrapper[4725]: I1014 13:35:27.133978 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2bmp\" (UniqueName: \"kubernetes.io/projected/42ae164c-62cd-48cd-a5cf-17ce40c2cc61-kube-api-access-w2bmp\") pod \"nova-metadata-0\" (UID: \"42ae164c-62cd-48cd-a5cf-17ce40c2cc61\") " pod="openstack/nova-metadata-0" Oct 14 13:35:27 crc kubenswrapper[4725]: I1014 13:35:27.213170 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 13:35:27 crc kubenswrapper[4725]: I1014 13:35:27.227545 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 13:35:27 crc kubenswrapper[4725]: I1014 13:35:27.245634 4725 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-scheduler-0"] Oct 14 13:35:27 crc kubenswrapper[4725]: I1014 13:35:27.247088 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 13:35:27 crc kubenswrapper[4725]: I1014 13:35:27.248709 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 14 13:35:27 crc kubenswrapper[4725]: I1014 13:35:27.255551 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 13:35:27 crc kubenswrapper[4725]: I1014 13:35:27.297965 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 14 13:35:27 crc kubenswrapper[4725]: I1014 13:35:27.321219 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/720e68c1-d52e-4606-a3ac-a331c8039890-config-data\") pod \"nova-scheduler-0\" (UID: \"720e68c1-d52e-4606-a3ac-a331c8039890\") " pod="openstack/nova-scheduler-0" Oct 14 13:35:27 crc kubenswrapper[4725]: I1014 13:35:27.321309 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7nfj\" (UniqueName: \"kubernetes.io/projected/720e68c1-d52e-4606-a3ac-a331c8039890-kube-api-access-r7nfj\") pod \"nova-scheduler-0\" (UID: \"720e68c1-d52e-4606-a3ac-a331c8039890\") " pod="openstack/nova-scheduler-0" Oct 14 13:35:27 crc kubenswrapper[4725]: I1014 13:35:27.321498 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/720e68c1-d52e-4606-a3ac-a331c8039890-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"720e68c1-d52e-4606-a3ac-a331c8039890\") " pod="openstack/nova-scheduler-0" Oct 14 13:35:27 crc kubenswrapper[4725]: I1014 13:35:27.423266 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/720e68c1-d52e-4606-a3ac-a331c8039890-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"720e68c1-d52e-4606-a3ac-a331c8039890\") " pod="openstack/nova-scheduler-0" Oct 14 13:35:27 crc kubenswrapper[4725]: I1014 13:35:27.423613 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/720e68c1-d52e-4606-a3ac-a331c8039890-config-data\") pod \"nova-scheduler-0\" (UID: \"720e68c1-d52e-4606-a3ac-a331c8039890\") " pod="openstack/nova-scheduler-0" Oct 14 13:35:27 crc kubenswrapper[4725]: I1014 13:35:27.423659 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7nfj\" (UniqueName: \"kubernetes.io/projected/720e68c1-d52e-4606-a3ac-a331c8039890-kube-api-access-r7nfj\") pod \"nova-scheduler-0\" (UID: \"720e68c1-d52e-4606-a3ac-a331c8039890\") " pod="openstack/nova-scheduler-0" Oct 14 13:35:27 crc kubenswrapper[4725]: I1014 13:35:27.430475 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/720e68c1-d52e-4606-a3ac-a331c8039890-config-data\") pod \"nova-scheduler-0\" (UID: \"720e68c1-d52e-4606-a3ac-a331c8039890\") " pod="openstack/nova-scheduler-0" Oct 14 13:35:27 crc kubenswrapper[4725]: I1014 13:35:27.432092 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/720e68c1-d52e-4606-a3ac-a331c8039890-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"720e68c1-d52e-4606-a3ac-a331c8039890\") " pod="openstack/nova-scheduler-0" Oct 14 13:35:27 crc kubenswrapper[4725]: I1014 13:35:27.448670 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7nfj\" (UniqueName: \"kubernetes.io/projected/720e68c1-d52e-4606-a3ac-a331c8039890-kube-api-access-r7nfj\") pod \"nova-scheduler-0\" (UID: \"720e68c1-d52e-4606-a3ac-a331c8039890\") " pod="openstack/nova-scheduler-0" Oct 14 13:35:27 crc kubenswrapper[4725]: I1014 13:35:27.568880 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 14 13:35:27 crc kubenswrapper[4725]: I1014 13:35:27.753888 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 14 13:35:27 crc kubenswrapper[4725]: W1014 13:35:27.759068 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42ae164c_62cd_48cd_a5cf_17ce40c2cc61.slice/crio-1915b94e458f14b9eea60f737897c4c99938be0d897c16f7720a30410a0f7f91 WatchSource:0}: Error finding container 1915b94e458f14b9eea60f737897c4c99938be0d897c16f7720a30410a0f7f91: Status 404 returned error can't find the container with id 1915b94e458f14b9eea60f737897c4c99938be0d897c16f7720a30410a0f7f91 Oct 14 13:35:27 crc kubenswrapper[4725]: I1014 13:35:27.899269 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"42ae164c-62cd-48cd-a5cf-17ce40c2cc61","Type":"ContainerStarted","Data":"1915b94e458f14b9eea60f737897c4c99938be0d897c16f7720a30410a0f7f91"} Oct 14 13:35:27 crc kubenswrapper[4725]: I1014 13:35:27.904291 4725 generic.go:334] "Generic (PLEG): container finished" podID="6ae2f566-a1e5-4bbd-8502-29665c052b35" containerID="deb65aa37755f824f9ac2869f7ec115ec8e38919951ecc4cdaedb8fc8d6c6b5b" exitCode=0 Oct 14 13:35:27 crc kubenswrapper[4725]: I1014 13:35:27.904325 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6ae2f566-a1e5-4bbd-8502-29665c052b35","Type":"ContainerDied","Data":"deb65aa37755f824f9ac2869f7ec115ec8e38919951ecc4cdaedb8fc8d6c6b5b"} Oct 14 13:35:27 crc kubenswrapper[4725]: I1014 13:35:27.935856 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7243f030-3d65-4fc3-bf95-7a3df406ad4d" path="/var/lib/kubelet/pods/7243f030-3d65-4fc3-bf95-7a3df406ad4d/volumes" Oct 14 13:35:27 crc kubenswrapper[4725]: I1014 13:35:27.936521 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2f2aea4-7075-45b5-94c7-73d1b181beb8" path="/var/lib/kubelet/pods/a2f2aea4-7075-45b5-94c7-73d1b181beb8/volumes" Oct 14 13:35:27 crc kubenswrapper[4725]: I1014 13:35:27.941742 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 14 13:35:28 crc kubenswrapper[4725]: I1014 13:35:28.035988 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ae2f566-a1e5-4bbd-8502-29665c052b35-internal-tls-certs\") pod \"6ae2f566-a1e5-4bbd-8502-29665c052b35\" (UID: \"6ae2f566-a1e5-4bbd-8502-29665c052b35\") " Oct 14 13:35:28 crc kubenswrapper[4725]: I1014 13:35:28.036093 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrsgr\" (UniqueName: \"kubernetes.io/projected/6ae2f566-a1e5-4bbd-8502-29665c052b35-kube-api-access-rrsgr\") pod \"6ae2f566-a1e5-4bbd-8502-29665c052b35\" (UID: \"6ae2f566-a1e5-4bbd-8502-29665c052b35\") " Oct 14 13:35:28 crc kubenswrapper[4725]: I1014 13:35:28.036116 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ae2f566-a1e5-4bbd-8502-29665c052b35-combined-ca-bundle\") pod \"6ae2f566-a1e5-4bbd-8502-29665c052b35\" (UID: \"6ae2f566-a1e5-4bbd-8502-29665c052b35\") " Oct 14 13:35:28 crc kubenswrapper[4725]: I1014 13:35:28.036214 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ae2f566-a1e5-4bbd-8502-29665c052b35-config-data\") pod \"6ae2f566-a1e5-4bbd-8502-29665c052b35\" (UID: \"6ae2f566-a1e5-4bbd-8502-29665c052b35\") " Oct 14 13:35:28 crc kubenswrapper[4725]: I1014 13:35:28.036314 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ae2f566-a1e5-4bbd-8502-29665c052b35-logs\") pod \"6ae2f566-a1e5-4bbd-8502-29665c052b35\" (UID: \"6ae2f566-a1e5-4bbd-8502-29665c052b35\") " Oct 14 13:35:28 crc kubenswrapper[4725]: I1014 13:35:28.036627 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ae2f566-a1e5-4bbd-8502-29665c052b35-public-tls-certs\") pod \"6ae2f566-a1e5-4bbd-8502-29665c052b35\" (UID: \"6ae2f566-a1e5-4bbd-8502-29665c052b35\") " Oct 14 13:35:28 crc kubenswrapper[4725]: I1014 13:35:28.038596 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ae2f566-a1e5-4bbd-8502-29665c052b35-logs" (OuterVolumeSpecName: "logs") pod "6ae2f566-a1e5-4bbd-8502-29665c052b35" (UID: "6ae2f566-a1e5-4bbd-8502-29665c052b35"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:35:28 crc kubenswrapper[4725]: I1014 13:35:28.040607 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ae2f566-a1e5-4bbd-8502-29665c052b35-kube-api-access-rrsgr" (OuterVolumeSpecName: "kube-api-access-rrsgr") pod "6ae2f566-a1e5-4bbd-8502-29665c052b35" (UID: "6ae2f566-a1e5-4bbd-8502-29665c052b35"). InnerVolumeSpecName "kube-api-access-rrsgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:35:28 crc kubenswrapper[4725]: I1014 13:35:28.064357 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ae2f566-a1e5-4bbd-8502-29665c052b35-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6ae2f566-a1e5-4bbd-8502-29665c052b35" (UID: "6ae2f566-a1e5-4bbd-8502-29665c052b35"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:35:28 crc kubenswrapper[4725]: I1014 13:35:28.070753 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ae2f566-a1e5-4bbd-8502-29665c052b35-config-data" (OuterVolumeSpecName: "config-data") pod "6ae2f566-a1e5-4bbd-8502-29665c052b35" (UID: "6ae2f566-a1e5-4bbd-8502-29665c052b35"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:35:28 crc kubenswrapper[4725]: I1014 13:35:28.098739 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ae2f566-a1e5-4bbd-8502-29665c052b35-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6ae2f566-a1e5-4bbd-8502-29665c052b35" (UID: "6ae2f566-a1e5-4bbd-8502-29665c052b35"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:35:28 crc kubenswrapper[4725]: I1014 13:35:28.109194 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ae2f566-a1e5-4bbd-8502-29665c052b35-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6ae2f566-a1e5-4bbd-8502-29665c052b35" (UID: "6ae2f566-a1e5-4bbd-8502-29665c052b35"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:35:28 crc kubenswrapper[4725]: I1014 13:35:28.130802 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 14 13:35:28 crc kubenswrapper[4725]: I1014 13:35:28.138823 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrsgr\" (UniqueName: \"kubernetes.io/projected/6ae2f566-a1e5-4bbd-8502-29665c052b35-kube-api-access-rrsgr\") on node \"crc\" DevicePath \"\"" Oct 14 13:35:28 crc kubenswrapper[4725]: I1014 13:35:28.138856 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ae2f566-a1e5-4bbd-8502-29665c052b35-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:35:28 crc kubenswrapper[4725]: I1014 13:35:28.138868 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ae2f566-a1e5-4bbd-8502-29665c052b35-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 13:35:28 crc kubenswrapper[4725]: I1014 13:35:28.138880 4725 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ae2f566-a1e5-4bbd-8502-29665c052b35-logs\") on node \"crc\" DevicePath \"\"" Oct 14 13:35:28 crc kubenswrapper[4725]: I1014 13:35:28.138889 4725 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ae2f566-a1e5-4bbd-8502-29665c052b35-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 13:35:28 crc kubenswrapper[4725]: I1014 13:35:28.138900 4725 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ae2f566-a1e5-4bbd-8502-29665c052b35-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 14 13:35:28 crc kubenswrapper[4725]: I1014 13:35:28.915345 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6ae2f566-a1e5-4bbd-8502-29665c052b35","Type":"ContainerDied","Data":"35d4bd6947cdc02ce89facc58a300383108bb0bd7eaf680d4afbd2cae16aab41"} Oct 14 13:35:28 crc kubenswrapper[4725]: I1014 13:35:28.915411 4725 scope.go:117] "RemoveContainer" 
containerID="deb65aa37755f824f9ac2869f7ec115ec8e38919951ecc4cdaedb8fc8d6c6b5b" Oct 14 13:35:28 crc kubenswrapper[4725]: I1014 13:35:28.915439 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 14 13:35:28 crc kubenswrapper[4725]: I1014 13:35:28.918949 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"42ae164c-62cd-48cd-a5cf-17ce40c2cc61","Type":"ContainerStarted","Data":"32a3ae107c3019b0ea36f4670edfa67ac93315d82e6f60ed03a3d328a60bc143"} Oct 14 13:35:28 crc kubenswrapper[4725]: I1014 13:35:28.919007 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"42ae164c-62cd-48cd-a5cf-17ce40c2cc61","Type":"ContainerStarted","Data":"da390a47d622913a7c30cfc1479c741a349276b2bdabb1390d02e3716e548bac"} Oct 14 13:35:28 crc kubenswrapper[4725]: I1014 13:35:28.924137 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"720e68c1-d52e-4606-a3ac-a331c8039890","Type":"ContainerStarted","Data":"0c55c3900598436e97df57050f6ed3ec673e015bb84642f004029a0c09b39a49"} Oct 14 13:35:28 crc kubenswrapper[4725]: I1014 13:35:28.924371 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"720e68c1-d52e-4606-a3ac-a331c8039890","Type":"ContainerStarted","Data":"51b883e95464a8f9f2ed797fd52750f99d32ed17f1d1f771b6af8a010ec42746"} Oct 14 13:35:28 crc kubenswrapper[4725]: I1014 13:35:28.944072 4725 scope.go:117] "RemoveContainer" containerID="7430cc87cacdd521cf1a20f3d0632f98345ee6b87735b9f78146329317626a08" Oct 14 13:35:28 crc kubenswrapper[4725]: I1014 13:35:28.990197 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.9901756649999998 podStartE2EDuration="2.990175665s" podCreationTimestamp="2025-10-14 13:35:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:35:28.947158065 +0000 UTC m=+1245.795592874" watchObservedRunningTime="2025-10-14 13:35:28.990175665 +0000 UTC m=+1245.838610474" Oct 14 13:35:29 crc kubenswrapper[4725]: I1014 13:35:29.016103 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 14 13:35:29 crc kubenswrapper[4725]: I1014 13:35:29.023338 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 14 13:35:29 crc kubenswrapper[4725]: I1014 13:35:29.030592 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 14 13:35:29 crc kubenswrapper[4725]: E1014 13:35:29.031010 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ae2f566-a1e5-4bbd-8502-29665c052b35" containerName="nova-api-log" Oct 14 13:35:29 crc kubenswrapper[4725]: I1014 13:35:29.031022 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ae2f566-a1e5-4bbd-8502-29665c052b35" containerName="nova-api-log" Oct 14 13:35:29 crc kubenswrapper[4725]: E1014 13:35:29.031043 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ae2f566-a1e5-4bbd-8502-29665c052b35" containerName="nova-api-api" Oct 14 13:35:29 crc kubenswrapper[4725]: I1014 13:35:29.031049 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ae2f566-a1e5-4bbd-8502-29665c052b35" containerName="nova-api-api" Oct 14 13:35:29 crc kubenswrapper[4725]: I1014 13:35:29.031284 4725 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="6ae2f566-a1e5-4bbd-8502-29665c052b35" containerName="nova-api-log" Oct 14 13:35:29 crc kubenswrapper[4725]: I1014 13:35:29.031305 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ae2f566-a1e5-4bbd-8502-29665c052b35" containerName="nova-api-api" Oct 14 13:35:29 crc kubenswrapper[4725]: I1014 13:35:29.033908 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.033889084 podStartE2EDuration="2.033889084s" podCreationTimestamp="2025-10-14 13:35:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:35:28.983643604 +0000 UTC m=+1245.832078413" watchObservedRunningTime="2025-10-14 13:35:29.033889084 +0000 UTC m=+1245.882323893" Oct 14 13:35:29 crc kubenswrapper[4725]: I1014 13:35:29.036585 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 14 13:35:29 crc kubenswrapper[4725]: I1014 13:35:29.040810 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Oct 14 13:35:29 crc kubenswrapper[4725]: I1014 13:35:29.040822 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 14 13:35:29 crc kubenswrapper[4725]: I1014 13:35:29.041015 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Oct 14 13:35:29 crc kubenswrapper[4725]: I1014 13:35:29.045638 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 14 13:35:29 crc kubenswrapper[4725]: I1014 13:35:29.154806 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/451fbb86-072b-41e1-8c6a-2433844f2e6e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"451fbb86-072b-41e1-8c6a-2433844f2e6e\") " pod="openstack/nova-api-0" Oct 14 13:35:29 crc kubenswrapper[4725]: I1014 13:35:29.155131 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/451fbb86-072b-41e1-8c6a-2433844f2e6e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"451fbb86-072b-41e1-8c6a-2433844f2e6e\") " pod="openstack/nova-api-0" Oct 14 13:35:29 crc kubenswrapper[4725]: I1014 13:35:29.155185 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pnt5\" (UniqueName: \"kubernetes.io/projected/451fbb86-072b-41e1-8c6a-2433844f2e6e-kube-api-access-2pnt5\") pod \"nova-api-0\" (UID: \"451fbb86-072b-41e1-8c6a-2433844f2e6e\") " pod="openstack/nova-api-0" Oct 14 13:35:29 crc kubenswrapper[4725]: I1014 13:35:29.155317 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/451fbb86-072b-41e1-8c6a-2433844f2e6e-config-data\") pod \"nova-api-0\" (UID: \"451fbb86-072b-41e1-8c6a-2433844f2e6e\") " pod="openstack/nova-api-0" Oct 14 13:35:29 crc kubenswrapper[4725]: I1014 13:35:29.155366 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/451fbb86-072b-41e1-8c6a-2433844f2e6e-public-tls-certs\") pod \"nova-api-0\" (UID: \"451fbb86-072b-41e1-8c6a-2433844f2e6e\") " pod="openstack/nova-api-0" Oct 14 13:35:29 crc 
kubenswrapper[4725]: I1014 13:35:29.155468 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/451fbb86-072b-41e1-8c6a-2433844f2e6e-logs\") pod \"nova-api-0\" (UID: \"451fbb86-072b-41e1-8c6a-2433844f2e6e\") " pod="openstack/nova-api-0" Oct 14 13:35:29 crc kubenswrapper[4725]: I1014 13:35:29.257299 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/451fbb86-072b-41e1-8c6a-2433844f2e6e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"451fbb86-072b-41e1-8c6a-2433844f2e6e\") " pod="openstack/nova-api-0" Oct 14 13:35:29 crc kubenswrapper[4725]: I1014 13:35:29.257373 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pnt5\" (UniqueName: \"kubernetes.io/projected/451fbb86-072b-41e1-8c6a-2433844f2e6e-kube-api-access-2pnt5\") pod \"nova-api-0\" (UID: \"451fbb86-072b-41e1-8c6a-2433844f2e6e\") " pod="openstack/nova-api-0" Oct 14 13:35:29 crc kubenswrapper[4725]: I1014 13:35:29.257407 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/451fbb86-072b-41e1-8c6a-2433844f2e6e-config-data\") pod \"nova-api-0\" (UID: \"451fbb86-072b-41e1-8c6a-2433844f2e6e\") " pod="openstack/nova-api-0" Oct 14 13:35:29 crc kubenswrapper[4725]: I1014 13:35:29.257459 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/451fbb86-072b-41e1-8c6a-2433844f2e6e-public-tls-certs\") pod \"nova-api-0\" (UID: \"451fbb86-072b-41e1-8c6a-2433844f2e6e\") " pod="openstack/nova-api-0" Oct 14 13:35:29 crc kubenswrapper[4725]: I1014 13:35:29.257504 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/451fbb86-072b-41e1-8c6a-2433844f2e6e-logs\") pod \"nova-api-0\" (UID: \"451fbb86-072b-41e1-8c6a-2433844f2e6e\") " pod="openstack/nova-api-0" Oct 14 13:35:29 crc kubenswrapper[4725]: I1014 13:35:29.257595 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/451fbb86-072b-41e1-8c6a-2433844f2e6e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"451fbb86-072b-41e1-8c6a-2433844f2e6e\") " pod="openstack/nova-api-0" Oct 14 13:35:29 crc kubenswrapper[4725]: I1014 13:35:29.258136 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/451fbb86-072b-41e1-8c6a-2433844f2e6e-logs\") pod \"nova-api-0\" (UID: \"451fbb86-072b-41e1-8c6a-2433844f2e6e\") " pod="openstack/nova-api-0" Oct 14 13:35:29 crc kubenswrapper[4725]: I1014 13:35:29.262465 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/451fbb86-072b-41e1-8c6a-2433844f2e6e-config-data\") pod \"nova-api-0\" (UID: \"451fbb86-072b-41e1-8c6a-2433844f2e6e\") " pod="openstack/nova-api-0" Oct 14 13:35:29 crc kubenswrapper[4725]: I1014 13:35:29.280052 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/451fbb86-072b-41e1-8c6a-2433844f2e6e-public-tls-certs\") pod \"nova-api-0\" (UID: \"451fbb86-072b-41e1-8c6a-2433844f2e6e\") " pod="openstack/nova-api-0" Oct 14 13:35:29 crc kubenswrapper[4725]: I1014 13:35:29.280758 4725 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/451fbb86-072b-41e1-8c6a-2433844f2e6e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"451fbb86-072b-41e1-8c6a-2433844f2e6e\") " pod="openstack/nova-api-0" Oct 14 13:35:29 crc kubenswrapper[4725]: I1014 13:35:29.281986 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/451fbb86-072b-41e1-8c6a-2433844f2e6e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"451fbb86-072b-41e1-8c6a-2433844f2e6e\") " pod="openstack/nova-api-0" Oct 14 13:35:29 crc kubenswrapper[4725]: I1014 13:35:29.283128 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pnt5\" (UniqueName: \"kubernetes.io/projected/451fbb86-072b-41e1-8c6a-2433844f2e6e-kube-api-access-2pnt5\") pod \"nova-api-0\" (UID: \"451fbb86-072b-41e1-8c6a-2433844f2e6e\") " pod="openstack/nova-api-0" Oct 14 13:35:29 crc kubenswrapper[4725]: I1014 13:35:29.352874 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 14 13:35:30 crc kubenswrapper[4725]: I1014 13:35:29.807789 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 14 13:35:30 crc kubenswrapper[4725]: I1014 13:35:29.933972 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ae2f566-a1e5-4bbd-8502-29665c052b35" path="/var/lib/kubelet/pods/6ae2f566-a1e5-4bbd-8502-29665c052b35/volumes" Oct 14 13:35:30 crc kubenswrapper[4725]: I1014 13:35:29.946053 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"451fbb86-072b-41e1-8c6a-2433844f2e6e","Type":"ContainerStarted","Data":"f75b5586008c2ab56dd2b86faaaa598a1742de3cdd9ba2f596f805ee51ce95a6"} Oct 14 13:35:30 crc kubenswrapper[4725]: I1014 13:35:30.964289 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"451fbb86-072b-41e1-8c6a-2433844f2e6e","Type":"ContainerStarted","Data":"8f9213b8c69d5bc92686faa9805ae77bbce44799ef4427ece0a0d14035e7e541"} Oct 14 13:35:30 crc kubenswrapper[4725]: I1014 13:35:30.964833 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"451fbb86-072b-41e1-8c6a-2433844f2e6e","Type":"ContainerStarted","Data":"9452afc643888b005e4155ea4e7bd0795fc5e28561d94c8056461f1f987fd012"} Oct 14 13:35:32 crc kubenswrapper[4725]: I1014 13:35:32.299656 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 14 13:35:32 crc kubenswrapper[4725]: I1014 13:35:32.299744 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 14 13:35:32 crc kubenswrapper[4725]: I1014 13:35:32.569169 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 14 13:35:37 crc kubenswrapper[4725]: I1014 13:35:37.299141 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 14 13:35:37 crc kubenswrapper[4725]: I1014 13:35:37.299609 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 14 13:35:37 crc kubenswrapper[4725]: I1014 13:35:37.570005 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 14 13:35:37 crc kubenswrapper[4725]: I1014 13:35:37.597575 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/nova-scheduler-0" Oct 14 13:35:37 crc kubenswrapper[4725]: I1014 13:35:37.626266 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=9.626245661 podStartE2EDuration="9.626245661s" podCreationTimestamp="2025-10-14 13:35:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:35:30.997942085 +0000 UTC m=+1247.846376934" watchObservedRunningTime="2025-10-14 13:35:37.626245661 +0000 UTC m=+1254.474680480" Oct 14 13:35:38 crc kubenswrapper[4725]: I1014 13:35:38.087819 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 14 13:35:38 crc kubenswrapper[4725]: I1014 13:35:38.310670 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="42ae164c-62cd-48cd-a5cf-17ce40c2cc61" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 13:35:38 crc kubenswrapper[4725]: I1014 13:35:38.310701 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="42ae164c-62cd-48cd-a5cf-17ce40c2cc61" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 13:35:39 crc kubenswrapper[4725]: I1014 13:35:39.353771 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 14 13:35:39 crc kubenswrapper[4725]: I1014 13:35:39.354183 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 14 13:35:40 crc kubenswrapper[4725]: I1014 13:35:40.369695 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="451fbb86-072b-41e1-8c6a-2433844f2e6e" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 13:35:40 crc kubenswrapper[4725]: I1014 13:35:40.369791 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="451fbb86-072b-41e1-8c6a-2433844f2e6e" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 14 13:35:44 crc kubenswrapper[4725]: I1014 13:35:44.115213 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 14 13:35:47 crc kubenswrapper[4725]: I1014 13:35:47.304755 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 14 13:35:47 crc kubenswrapper[4725]: I1014 13:35:47.309801 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 14 13:35:47 crc kubenswrapper[4725]: I1014 13:35:47.314076 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 14 13:35:48 crc kubenswrapper[4725]: I1014 13:35:48.163255 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 14 13:35:49 crc kubenswrapper[4725]: I1014 13:35:49.361895 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/nova-api-0" Oct 14 13:35:49 crc kubenswrapper[4725]: I1014 13:35:49.362342 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 14 13:35:49 crc kubenswrapper[4725]: I1014 13:35:49.366643 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 14 13:35:49 crc kubenswrapper[4725]: I1014 13:35:49.370296 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 14 13:35:50 crc kubenswrapper[4725]: I1014 13:35:50.173640 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 14 13:35:50 crc kubenswrapper[4725]: I1014 13:35:50.180669 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 14 13:35:58 crc kubenswrapper[4725]: I1014 13:35:58.415589 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 14 13:35:59 crc kubenswrapper[4725]: I1014 13:35:59.428732 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 14 13:36:02 crc kubenswrapper[4725]: I1014 13:36:02.277418 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="cc3270b7-85d3-4e11-a5e1-d9e42f8b876c" containerName="rabbitmq" containerID="cri-o://01a850d76feb2f99796249d8faf492b913822b629bba3a40210d1f02f75d0fff" gracePeriod=604797 Oct 14 13:36:03 crc kubenswrapper[4725]: I1014 13:36:03.426674 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="e690ed1d-b1fe-48b5-817c-d512cef45181" containerName="rabbitmq" containerID="cri-o://31b014df818ced151387b051e61349052027454c22b3d318be09ba0d92bf08c0" gracePeriod=604797 Oct 14 13:36:05 crc kubenswrapper[4725]: I1014 13:36:05.199875 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="cc3270b7-85d3-4e11-a5e1-d9e42f8b876c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: connect: connection refused" Oct 14 13:36:05 crc kubenswrapper[4725]: I1014 13:36:05.286955 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="e690ed1d-b1fe-48b5-817c-d512cef45181" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused" Oct 14 13:36:08 crc kubenswrapper[4725]: I1014 13:36:08.900769 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.037030 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-rabbitmq-tls\") pod \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\" (UID: \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\") " Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.037072 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-erlang-cookie-secret\") pod \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\" (UID: \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\") " Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.037207 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-pod-info\") pod \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\" (UID: \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\") " Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.037232 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-server-conf\") pod \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\" (UID: \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\") " Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.037259 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-rabbitmq-erlang-cookie\") pod \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\" (UID: \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\") " Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.037273 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-rabbitmq-plugins\") pod \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\" (UID: \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\") " Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.037316 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-rabbitmq-confd\") pod \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\" (UID: \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\") " Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.037336 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\" (UID: \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\") " Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.037402 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8wp7\" (UniqueName: \"kubernetes.io/projected/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-kube-api-access-d8wp7\") pod \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\" (UID: \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\") " Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.037439 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-config-data\") pod \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\" (UID: 
\"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\") " Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.037476 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-plugins-conf\") pod \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\" (UID: \"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c\") " Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.038438 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "cc3270b7-85d3-4e11-a5e1-d9e42f8b876c" (UID: "cc3270b7-85d3-4e11-a5e1-d9e42f8b876c"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.038473 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "cc3270b7-85d3-4e11-a5e1-d9e42f8b876c" (UID: "cc3270b7-85d3-4e11-a5e1-d9e42f8b876c"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.038430 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "cc3270b7-85d3-4e11-a5e1-d9e42f8b876c" (UID: "cc3270b7-85d3-4e11-a5e1-d9e42f8b876c"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.044070 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "persistence") pod "cc3270b7-85d3-4e11-a5e1-d9e42f8b876c" (UID: "cc3270b7-85d3-4e11-a5e1-d9e42f8b876c"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.044212 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "cc3270b7-85d3-4e11-a5e1-d9e42f8b876c" (UID: "cc3270b7-85d3-4e11-a5e1-d9e42f8b876c"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.044526 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-pod-info" (OuterVolumeSpecName: "pod-info") pod "cc3270b7-85d3-4e11-a5e1-d9e42f8b876c" (UID: "cc3270b7-85d3-4e11-a5e1-d9e42f8b876c"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.045068 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "cc3270b7-85d3-4e11-a5e1-d9e42f8b876c" (UID: "cc3270b7-85d3-4e11-a5e1-d9e42f8b876c"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.045779 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-kube-api-access-d8wp7" (OuterVolumeSpecName: "kube-api-access-d8wp7") pod "cc3270b7-85d3-4e11-a5e1-d9e42f8b876c" (UID: "cc3270b7-85d3-4e11-a5e1-d9e42f8b876c"). InnerVolumeSpecName "kube-api-access-d8wp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.063273 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-config-data" (OuterVolumeSpecName: "config-data") pod "cc3270b7-85d3-4e11-a5e1-d9e42f8b876c" (UID: "cc3270b7-85d3-4e11-a5e1-d9e42f8b876c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.086007 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-server-conf" (OuterVolumeSpecName: "server-conf") pod "cc3270b7-85d3-4e11-a5e1-d9e42f8b876c" (UID: "cc3270b7-85d3-4e11-a5e1-d9e42f8b876c"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.140411 4725 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-pod-info\") on node \"crc\" DevicePath \"\"" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.140445 4725 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-server-conf\") on node \"crc\" DevicePath \"\"" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.140476 4725 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.140487 4725 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.140520 4725 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.140534 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8wp7\" (UniqueName: \"kubernetes.io/projected/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-kube-api-access-d8wp7\") on node \"crc\" DevicePath \"\"" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.140546 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.140560 4725 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-plugins-conf\") on node \"crc\" DevicePath \"\"" Oct 14 13:36:09 crc 
kubenswrapper[4725]: I1014 13:36:09.140570 4725 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.140581 4725 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.160236 4725 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.168139 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "cc3270b7-85d3-4e11-a5e1-d9e42f8b876c" (UID: "cc3270b7-85d3-4e11-a5e1-d9e42f8b876c"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.242524 4725 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.242559 4725 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.376416 4725 generic.go:334] "Generic (PLEG): container finished" podID="cc3270b7-85d3-4e11-a5e1-d9e42f8b876c" containerID="01a850d76feb2f99796249d8faf492b913822b629bba3a40210d1f02f75d0fff" exitCode=0 Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.376494 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c","Type":"ContainerDied","Data":"01a850d76feb2f99796249d8faf492b913822b629bba3a40210d1f02f75d0fff"} Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.376548 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cc3270b7-85d3-4e11-a5e1-d9e42f8b876c","Type":"ContainerDied","Data":"7d73c1343f22fb6bff8d679a6d982d7b28ff6b5dbd0736eb6864fb97044b2d18"} Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.376568 4725 scope.go:117] "RemoveContainer" containerID="01a850d76feb2f99796249d8faf492b913822b629bba3a40210d1f02f75d0fff" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.376728 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.404217 4725 scope.go:117] "RemoveContainer" containerID="9af8ac336b7385e92f1f7700ffd3de85aee088e2f9349df4545a5e644720a771" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.418524 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.427740 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.462702 4725 scope.go:117] "RemoveContainer" containerID="01a850d76feb2f99796249d8faf492b913822b629bba3a40210d1f02f75d0fff" Oct 14 13:36:09 crc kubenswrapper[4725]: E1014 13:36:09.466207 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01a850d76feb2f99796249d8faf492b913822b629bba3a40210d1f02f75d0fff\": container with ID starting with 01a850d76feb2f99796249d8faf492b913822b629bba3a40210d1f02f75d0fff not found: ID does not exist" containerID="01a850d76feb2f99796249d8faf492b913822b629bba3a40210d1f02f75d0fff" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.466273 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01a850d76feb2f99796249d8faf492b913822b629bba3a40210d1f02f75d0fff"} err="failed to get container status \"01a850d76feb2f99796249d8faf492b913822b629bba3a40210d1f02f75d0fff\": rpc error: code = NotFound desc = could not find container \"01a850d76feb2f99796249d8faf492b913822b629bba3a40210d1f02f75d0fff\": container with ID starting with 01a850d76feb2f99796249d8faf492b913822b629bba3a40210d1f02f75d0fff not found: ID does not exist" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.466313 4725 scope.go:117] "RemoveContainer" containerID="9af8ac336b7385e92f1f7700ffd3de85aee088e2f9349df4545a5e644720a771" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.467057 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Oct 14 13:36:09 crc kubenswrapper[4725]: E1014 13:36:09.467861 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc3270b7-85d3-4e11-a5e1-d9e42f8b876c" containerName="setup-container" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.467993 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc3270b7-85d3-4e11-a5e1-d9e42f8b876c" containerName="setup-container" Oct 14 13:36:09 crc kubenswrapper[4725]: E1014 13:36:09.468066 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc3270b7-85d3-4e11-a5e1-d9e42f8b876c" containerName="rabbitmq" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.468128 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc3270b7-85d3-4e11-a5e1-d9e42f8b876c" containerName="rabbitmq" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.468491 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc3270b7-85d3-4e11-a5e1-d9e42f8b876c" containerName="rabbitmq" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.472515 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 14 13:36:09 crc kubenswrapper[4725]: E1014 13:36:09.474801 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9af8ac336b7385e92f1f7700ffd3de85aee088e2f9349df4545a5e644720a771\": container with ID starting with 9af8ac336b7385e92f1f7700ffd3de85aee088e2f9349df4545a5e644720a771 not found: ID does not exist" containerID="9af8ac336b7385e92f1f7700ffd3de85aee088e2f9349df4545a5e644720a771" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.474850 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9af8ac336b7385e92f1f7700ffd3de85aee088e2f9349df4545a5e644720a771"} err="failed to get container status \"9af8ac336b7385e92f1f7700ffd3de85aee088e2f9349df4545a5e644720a771\": rpc error: code = NotFound desc = could not find container \"9af8ac336b7385e92f1f7700ffd3de85aee088e2f9349df4545a5e644720a771\": container with ID starting with 9af8ac336b7385e92f1f7700ffd3de85aee088e2f9349df4545a5e644720a771 not found: ID does not exist" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.475662 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.475894 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.476611 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-d59t8" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.476700 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.477010 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.477243 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.478748 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.495802 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.653895 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cbf19e88-e140-4357-8255-fdc507d7db52-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cbf19e88-e140-4357-8255-fdc507d7db52\") " pod="openstack/rabbitmq-server-0" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.653931 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cbf19e88-e140-4357-8255-fdc507d7db52-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cbf19e88-e140-4357-8255-fdc507d7db52\") " pod="openstack/rabbitmq-server-0" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.653956 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cbf19e88-e140-4357-8255-fdc507d7db52-plugins-conf\") pod \"rabbitmq-server-0\" (UID: 
\"cbf19e88-e140-4357-8255-fdc507d7db52\") " pod="openstack/rabbitmq-server-0" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.654195 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cbf19e88-e140-4357-8255-fdc507d7db52-config-data\") pod \"rabbitmq-server-0\" (UID: \"cbf19e88-e140-4357-8255-fdc507d7db52\") " pod="openstack/rabbitmq-server-0" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.654293 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cbf19e88-e140-4357-8255-fdc507d7db52-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"cbf19e88-e140-4357-8255-fdc507d7db52\") " pod="openstack/rabbitmq-server-0" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.654418 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"cbf19e88-e140-4357-8255-fdc507d7db52\") " pod="openstack/rabbitmq-server-0" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.654522 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cbf19e88-e140-4357-8255-fdc507d7db52-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cbf19e88-e140-4357-8255-fdc507d7db52\") " pod="openstack/rabbitmq-server-0" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.654571 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cbf19e88-e140-4357-8255-fdc507d7db52-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cbf19e88-e140-4357-8255-fdc507d7db52\") " pod="openstack/rabbitmq-server-0" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.654625 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cbf19e88-e140-4357-8255-fdc507d7db52-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cbf19e88-e140-4357-8255-fdc507d7db52\") " pod="openstack/rabbitmq-server-0" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.654650 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86zx6\" (UniqueName: \"kubernetes.io/projected/cbf19e88-e140-4357-8255-fdc507d7db52-kube-api-access-86zx6\") pod \"rabbitmq-server-0\" (UID: \"cbf19e88-e140-4357-8255-fdc507d7db52\") " pod="openstack/rabbitmq-server-0" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.654682 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cbf19e88-e140-4357-8255-fdc507d7db52-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cbf19e88-e140-4357-8255-fdc507d7db52\") " pod="openstack/rabbitmq-server-0" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.756666 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cbf19e88-e140-4357-8255-fdc507d7db52-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cbf19e88-e140-4357-8255-fdc507d7db52\") " pod="openstack/rabbitmq-server-0" Oct 14 13:36:09 crc 
kubenswrapper[4725]: I1014 13:36:09.756707 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cbf19e88-e140-4357-8255-fdc507d7db52-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cbf19e88-e140-4357-8255-fdc507d7db52\") " pod="openstack/rabbitmq-server-0" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.756730 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cbf19e88-e140-4357-8255-fdc507d7db52-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cbf19e88-e140-4357-8255-fdc507d7db52\") " pod="openstack/rabbitmq-server-0" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.756788 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cbf19e88-e140-4357-8255-fdc507d7db52-config-data\") pod \"rabbitmq-server-0\" (UID: \"cbf19e88-e140-4357-8255-fdc507d7db52\") " pod="openstack/rabbitmq-server-0" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.756814 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cbf19e88-e140-4357-8255-fdc507d7db52-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"cbf19e88-e140-4357-8255-fdc507d7db52\") " pod="openstack/rabbitmq-server-0" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.756853 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"cbf19e88-e140-4357-8255-fdc507d7db52\") " pod="openstack/rabbitmq-server-0" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.756886 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cbf19e88-e140-4357-8255-fdc507d7db52-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cbf19e88-e140-4357-8255-fdc507d7db52\") " pod="openstack/rabbitmq-server-0" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.756912 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cbf19e88-e140-4357-8255-fdc507d7db52-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cbf19e88-e140-4357-8255-fdc507d7db52\") " pod="openstack/rabbitmq-server-0" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.756938 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cbf19e88-e140-4357-8255-fdc507d7db52-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cbf19e88-e140-4357-8255-fdc507d7db52\") " pod="openstack/rabbitmq-server-0" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.756952 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86zx6\" (UniqueName: \"kubernetes.io/projected/cbf19e88-e140-4357-8255-fdc507d7db52-kube-api-access-86zx6\") pod \"rabbitmq-server-0\" (UID: \"cbf19e88-e140-4357-8255-fdc507d7db52\") " pod="openstack/rabbitmq-server-0" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.756968 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cbf19e88-e140-4357-8255-fdc507d7db52-rabbitmq-confd\") pod 
\"rabbitmq-server-0\" (UID: \"cbf19e88-e140-4357-8255-fdc507d7db52\") " pod="openstack/rabbitmq-server-0" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.758024 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cbf19e88-e140-4357-8255-fdc507d7db52-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cbf19e88-e140-4357-8255-fdc507d7db52\") " pod="openstack/rabbitmq-server-0" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.758516 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cbf19e88-e140-4357-8255-fdc507d7db52-config-data\") pod \"rabbitmq-server-0\" (UID: \"cbf19e88-e140-4357-8255-fdc507d7db52\") " pod="openstack/rabbitmq-server-0" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.759191 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cbf19e88-e140-4357-8255-fdc507d7db52-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cbf19e88-e140-4357-8255-fdc507d7db52\") " pod="openstack/rabbitmq-server-0" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.759223 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cbf19e88-e140-4357-8255-fdc507d7db52-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cbf19e88-e140-4357-8255-fdc507d7db52\") " pod="openstack/rabbitmq-server-0" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.759650 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cbf19e88-e140-4357-8255-fdc507d7db52-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cbf19e88-e140-4357-8255-fdc507d7db52\") " pod="openstack/rabbitmq-server-0" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.759952 4725 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"cbf19e88-e140-4357-8255-fdc507d7db52\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-server-0" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.768070 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cbf19e88-e140-4357-8255-fdc507d7db52-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cbf19e88-e140-4357-8255-fdc507d7db52\") " pod="openstack/rabbitmq-server-0" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.773110 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cbf19e88-e140-4357-8255-fdc507d7db52-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cbf19e88-e140-4357-8255-fdc507d7db52\") " pod="openstack/rabbitmq-server-0" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.782791 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cbf19e88-e140-4357-8255-fdc507d7db52-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"cbf19e88-e140-4357-8255-fdc507d7db52\") " pod="openstack/rabbitmq-server-0" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.787113 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86zx6\" (UniqueName: 
\"kubernetes.io/projected/cbf19e88-e140-4357-8255-fdc507d7db52-kube-api-access-86zx6\") pod \"rabbitmq-server-0\" (UID: \"cbf19e88-e140-4357-8255-fdc507d7db52\") " pod="openstack/rabbitmq-server-0" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.801826 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"cbf19e88-e140-4357-8255-fdc507d7db52\") " pod="openstack/rabbitmq-server-0" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.808205 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cbf19e88-e140-4357-8255-fdc507d7db52-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cbf19e88-e140-4357-8255-fdc507d7db52\") " pod="openstack/rabbitmq-server-0" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.938673 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc3270b7-85d3-4e11-a5e1-d9e42f8b876c" path="/var/lib/kubelet/pods/cc3270b7-85d3-4e11-a5e1-d9e42f8b876c/volumes" Oct 14 13:36:09 crc kubenswrapper[4725]: I1014 13:36:09.944357 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.083741 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e690ed1d-b1fe-48b5-817c-d512cef45181-rabbitmq-erlang-cookie\") pod \"e690ed1d-b1fe-48b5-817c-d512cef45181\" (UID: \"e690ed1d-b1fe-48b5-817c-d512cef45181\") " Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.083801 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e690ed1d-b1fe-48b5-817c-d512cef45181-erlang-cookie-secret\") pod \"e690ed1d-b1fe-48b5-817c-d512cef45181\" (UID: \"e690ed1d-b1fe-48b5-817c-d512cef45181\") " Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.083875 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e690ed1d-b1fe-48b5-817c-d512cef45181-rabbitmq-tls\") pod \"e690ed1d-b1fe-48b5-817c-d512cef45181\" (UID: \"e690ed1d-b1fe-48b5-817c-d512cef45181\") " Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.083898 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e690ed1d-b1fe-48b5-817c-d512cef45181-rabbitmq-plugins\") pod \"e690ed1d-b1fe-48b5-817c-d512cef45181\" (UID: \"e690ed1d-b1fe-48b5-817c-d512cef45181\") " Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.083958 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e690ed1d-b1fe-48b5-817c-d512cef45181-pod-info\") pod \"e690ed1d-b1fe-48b5-817c-d512cef45181\" (UID: \"e690ed1d-b1fe-48b5-817c-d512cef45181\") " Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.084239 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7885\" (UniqueName: \"kubernetes.io/projected/e690ed1d-b1fe-48b5-817c-d512cef45181-kube-api-access-l7885\") pod \"e690ed1d-b1fe-48b5-817c-d512cef45181\" (UID: \"e690ed1d-b1fe-48b5-817c-d512cef45181\") " Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.084270 
4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e690ed1d-b1fe-48b5-817c-d512cef45181-plugins-conf\") pod \"e690ed1d-b1fe-48b5-817c-d512cef45181\" (UID: \"e690ed1d-b1fe-48b5-817c-d512cef45181\") " Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.084325 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"e690ed1d-b1fe-48b5-817c-d512cef45181\" (UID: \"e690ed1d-b1fe-48b5-817c-d512cef45181\") " Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.084420 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e690ed1d-b1fe-48b5-817c-d512cef45181-rabbitmq-confd\") pod \"e690ed1d-b1fe-48b5-817c-d512cef45181\" (UID: \"e690ed1d-b1fe-48b5-817c-d512cef45181\") " Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.084522 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e690ed1d-b1fe-48b5-817c-d512cef45181-server-conf\") pod \"e690ed1d-b1fe-48b5-817c-d512cef45181\" (UID: \"e690ed1d-b1fe-48b5-817c-d512cef45181\") " Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.084566 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e690ed1d-b1fe-48b5-817c-d512cef45181-config-data\") pod \"e690ed1d-b1fe-48b5-817c-d512cef45181\" (UID: \"e690ed1d-b1fe-48b5-817c-d512cef45181\") " Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.084605 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e690ed1d-b1fe-48b5-817c-d512cef45181-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "e690ed1d-b1fe-48b5-817c-d512cef45181" (UID: "e690ed1d-b1fe-48b5-817c-d512cef45181"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.085026 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e690ed1d-b1fe-48b5-817c-d512cef45181-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "e690ed1d-b1fe-48b5-817c-d512cef45181" (UID: "e690ed1d-b1fe-48b5-817c-d512cef45181"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.085153 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e690ed1d-b1fe-48b5-817c-d512cef45181-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "e690ed1d-b1fe-48b5-817c-d512cef45181" (UID: "e690ed1d-b1fe-48b5-817c-d512cef45181"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.085957 4725 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e690ed1d-b1fe-48b5-817c-d512cef45181-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.085992 4725 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e690ed1d-b1fe-48b5-817c-d512cef45181-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.086006 4725 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e690ed1d-b1fe-48b5-817c-d512cef45181-plugins-conf\") on node \"crc\" DevicePath \"\"" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.092321 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e690ed1d-b1fe-48b5-817c-d512cef45181-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "e690ed1d-b1fe-48b5-817c-d512cef45181" (UID: "e690ed1d-b1fe-48b5-817c-d512cef45181"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.092377 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/e690ed1d-b1fe-48b5-817c-d512cef45181-pod-info" (OuterVolumeSpecName: "pod-info") pod "e690ed1d-b1fe-48b5-817c-d512cef45181" (UID: "e690ed1d-b1fe-48b5-817c-d512cef45181"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.092688 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e690ed1d-b1fe-48b5-817c-d512cef45181-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "e690ed1d-b1fe-48b5-817c-d512cef45181" (UID: "e690ed1d-b1fe-48b5-817c-d512cef45181"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.093897 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "persistence") pod "e690ed1d-b1fe-48b5-817c-d512cef45181" (UID: "e690ed1d-b1fe-48b5-817c-d512cef45181"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.094788 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e690ed1d-b1fe-48b5-817c-d512cef45181-kube-api-access-l7885" (OuterVolumeSpecName: "kube-api-access-l7885") pod "e690ed1d-b1fe-48b5-817c-d512cef45181" (UID: "e690ed1d-b1fe-48b5-817c-d512cef45181"). InnerVolumeSpecName "kube-api-access-l7885". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.100585 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.134094 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e690ed1d-b1fe-48b5-817c-d512cef45181-config-data" (OuterVolumeSpecName: "config-data") pod "e690ed1d-b1fe-48b5-817c-d512cef45181" (UID: "e690ed1d-b1fe-48b5-817c-d512cef45181"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.146995 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-6544m"] Oct 14 13:36:10 crc kubenswrapper[4725]: E1014 13:36:10.147462 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e690ed1d-b1fe-48b5-817c-d512cef45181" containerName="setup-container" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.147479 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="e690ed1d-b1fe-48b5-817c-d512cef45181" containerName="setup-container" Oct 14 13:36:10 crc kubenswrapper[4725]: E1014 13:36:10.147517 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e690ed1d-b1fe-48b5-817c-d512cef45181" containerName="rabbitmq" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.147530 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="e690ed1d-b1fe-48b5-817c-d512cef45181" containerName="rabbitmq" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.147744 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="e690ed1d-b1fe-48b5-817c-d512cef45181" containerName="rabbitmq" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.148714 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-6544m" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.154795 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.162300 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-6544m"] Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.186059 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e690ed1d-b1fe-48b5-817c-d512cef45181-server-conf" (OuterVolumeSpecName: "server-conf") pod "e690ed1d-b1fe-48b5-817c-d512cef45181" (UID: "e690ed1d-b1fe-48b5-817c-d512cef45181"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.189423 4725 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e690ed1d-b1fe-48b5-817c-d512cef45181-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.189478 4725 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e690ed1d-b1fe-48b5-817c-d512cef45181-pod-info\") on node \"crc\" DevicePath \"\"" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.189488 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7885\" (UniqueName: \"kubernetes.io/projected/e690ed1d-b1fe-48b5-817c-d512cef45181-kube-api-access-l7885\") on node \"crc\" DevicePath \"\"" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.189527 4725 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.189553 4725 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e690ed1d-b1fe-48b5-817c-d512cef45181-server-conf\") on node \"crc\" DevicePath \"\"" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.189563 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e690ed1d-b1fe-48b5-817c-d512cef45181-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.189572 4725 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e690ed1d-b1fe-48b5-817c-d512cef45181-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.226220 4725 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.267596 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e690ed1d-b1fe-48b5-817c-d512cef45181-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "e690ed1d-b1fe-48b5-817c-d512cef45181" (UID: "e690ed1d-b1fe-48b5-817c-d512cef45181"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.290559 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfrhr\" (UniqueName: \"kubernetes.io/projected/212b7499-1ace-4744-be66-62bf9f3eb853-kube-api-access-pfrhr\") pod \"dnsmasq-dns-67b789f86c-6544m\" (UID: \"212b7499-1ace-4744-be66-62bf9f3eb853\") " pod="openstack/dnsmasq-dns-67b789f86c-6544m" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.290604 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/212b7499-1ace-4744-be66-62bf9f3eb853-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-6544m\" (UID: \"212b7499-1ace-4744-be66-62bf9f3eb853\") " pod="openstack/dnsmasq-dns-67b789f86c-6544m" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.290665 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/212b7499-1ace-4744-be66-62bf9f3eb853-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-6544m\" (UID: \"212b7499-1ace-4744-be66-62bf9f3eb853\") " pod="openstack/dnsmasq-dns-67b789f86c-6544m" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.290781 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/212b7499-1ace-4744-be66-62bf9f3eb853-config\") pod \"dnsmasq-dns-67b789f86c-6544m\" (UID: \"212b7499-1ace-4744-be66-62bf9f3eb853\") " pod="openstack/dnsmasq-dns-67b789f86c-6544m" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.290838 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/212b7499-1ace-4744-be66-62bf9f3eb853-dns-svc\") pod \"dnsmasq-dns-67b789f86c-6544m\" (UID: \"212b7499-1ace-4744-be66-62bf9f3eb853\") " pod="openstack/dnsmasq-dns-67b789f86c-6544m" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.290881 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/212b7499-1ace-4744-be66-62bf9f3eb853-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-6544m\" (UID: \"212b7499-1ace-4744-be66-62bf9f3eb853\") " pod="openstack/dnsmasq-dns-67b789f86c-6544m" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.290905 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/212b7499-1ace-4744-be66-62bf9f3eb853-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-6544m\" (UID: \"212b7499-1ace-4744-be66-62bf9f3eb853\") " pod="openstack/dnsmasq-dns-67b789f86c-6544m" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.290947 4725 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.290958 4725 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e690ed1d-b1fe-48b5-817c-d512cef45181-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.392911 4725 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/212b7499-1ace-4744-be66-62bf9f3eb853-dns-svc\") pod \"dnsmasq-dns-67b789f86c-6544m\" (UID: \"212b7499-1ace-4744-be66-62bf9f3eb853\") " pod="openstack/dnsmasq-dns-67b789f86c-6544m" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.392993 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/212b7499-1ace-4744-be66-62bf9f3eb853-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-6544m\" (UID: \"212b7499-1ace-4744-be66-62bf9f3eb853\") " pod="openstack/dnsmasq-dns-67b789f86c-6544m" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.393030 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/212b7499-1ace-4744-be66-62bf9f3eb853-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-6544m\" (UID: \"212b7499-1ace-4744-be66-62bf9f3eb853\") " pod="openstack/dnsmasq-dns-67b789f86c-6544m" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.393113 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfrhr\" (UniqueName: \"kubernetes.io/projected/212b7499-1ace-4744-be66-62bf9f3eb853-kube-api-access-pfrhr\") pod \"dnsmasq-dns-67b789f86c-6544m\" (UID: \"212b7499-1ace-4744-be66-62bf9f3eb853\") " pod="openstack/dnsmasq-dns-67b789f86c-6544m" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.393138 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/212b7499-1ace-4744-be66-62bf9f3eb853-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-6544m\" (UID: \"212b7499-1ace-4744-be66-62bf9f3eb853\") " pod="openstack/dnsmasq-dns-67b789f86c-6544m" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.393211 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/212b7499-1ace-4744-be66-62bf9f3eb853-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-6544m\" (UID: \"212b7499-1ace-4744-be66-62bf9f3eb853\") " pod="openstack/dnsmasq-dns-67b789f86c-6544m" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.393258 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/212b7499-1ace-4744-be66-62bf9f3eb853-config\") pod \"dnsmasq-dns-67b789f86c-6544m\" (UID: \"212b7499-1ace-4744-be66-62bf9f3eb853\") " pod="openstack/dnsmasq-dns-67b789f86c-6544m" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.394026 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/212b7499-1ace-4744-be66-62bf9f3eb853-dns-svc\") pod \"dnsmasq-dns-67b789f86c-6544m\" (UID: \"212b7499-1ace-4744-be66-62bf9f3eb853\") " pod="openstack/dnsmasq-dns-67b789f86c-6544m" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.394070 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/212b7499-1ace-4744-be66-62bf9f3eb853-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-6544m\" (UID: \"212b7499-1ace-4744-be66-62bf9f3eb853\") " pod="openstack/dnsmasq-dns-67b789f86c-6544m" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.394383 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/212b7499-1ace-4744-be66-62bf9f3eb853-config\") pod \"dnsmasq-dns-67b789f86c-6544m\" (UID: \"212b7499-1ace-4744-be66-62bf9f3eb853\") " pod="openstack/dnsmasq-dns-67b789f86c-6544m" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.394655 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/212b7499-1ace-4744-be66-62bf9f3eb853-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-6544m\" (UID: \"212b7499-1ace-4744-be66-62bf9f3eb853\") " pod="openstack/dnsmasq-dns-67b789f86c-6544m" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.395010 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/212b7499-1ace-4744-be66-62bf9f3eb853-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-6544m\" (UID: \"212b7499-1ace-4744-be66-62bf9f3eb853\") " pod="openstack/dnsmasq-dns-67b789f86c-6544m" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.395075 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/212b7499-1ace-4744-be66-62bf9f3eb853-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-6544m\" (UID: \"212b7499-1ace-4744-be66-62bf9f3eb853\") " pod="openstack/dnsmasq-dns-67b789f86c-6544m" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.396310 4725 generic.go:334] "Generic (PLEG): container finished" podID="e690ed1d-b1fe-48b5-817c-d512cef45181" containerID="31b014df818ced151387b051e61349052027454c22b3d318be09ba0d92bf08c0" exitCode=0 Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.396378 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.396421 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e690ed1d-b1fe-48b5-817c-d512cef45181","Type":"ContainerDied","Data":"31b014df818ced151387b051e61349052027454c22b3d318be09ba0d92bf08c0"} Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.396478 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e690ed1d-b1fe-48b5-817c-d512cef45181","Type":"ContainerDied","Data":"cdaac290a0bdf527579940451a43a2bc97038b579e5fcecbb84f97c28b05643b"} Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.396567 4725 scope.go:117] "RemoveContainer" containerID="31b014df818ced151387b051e61349052027454c22b3d318be09ba0d92bf08c0" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.417828 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfrhr\" (UniqueName: \"kubernetes.io/projected/212b7499-1ace-4744-be66-62bf9f3eb853-kube-api-access-pfrhr\") pod \"dnsmasq-dns-67b789f86c-6544m\" (UID: \"212b7499-1ace-4744-be66-62bf9f3eb853\") " pod="openstack/dnsmasq-dns-67b789f86c-6544m" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.431862 4725 scope.go:117] "RemoveContainer" containerID="469d6c9dd8280483d8bdc01c4219e2d1f77837452b0087234379692a73aa3bd2" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.436483 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.445525 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.458520 4725 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.461854 4725 scope.go:117] "RemoveContainer" containerID="31b014df818ced151387b051e61349052027454c22b3d318be09ba0d92bf08c0" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.462334 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 14 13:36:10 crc kubenswrapper[4725]: E1014 13:36:10.463308 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31b014df818ced151387b051e61349052027454c22b3d318be09ba0d92bf08c0\": container with ID starting with 31b014df818ced151387b051e61349052027454c22b3d318be09ba0d92bf08c0 not found: ID does not exist" containerID="31b014df818ced151387b051e61349052027454c22b3d318be09ba0d92bf08c0" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.463350 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31b014df818ced151387b051e61349052027454c22b3d318be09ba0d92bf08c0"} err="failed to get container status \"31b014df818ced151387b051e61349052027454c22b3d318be09ba0d92bf08c0\": rpc error: code = NotFound desc = could not find container \"31b014df818ced151387b051e61349052027454c22b3d318be09ba0d92bf08c0\": container with ID starting with 31b014df818ced151387b051e61349052027454c22b3d318be09ba0d92bf08c0 not found: ID does not exist" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.463377 4725 scope.go:117] "RemoveContainer" containerID="469d6c9dd8280483d8bdc01c4219e2d1f77837452b0087234379692a73aa3bd2" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.465429 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.467995 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.468334 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.468497 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-zgkdc" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.469857 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.470128 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.472851 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.473801 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 14 13:36:10 crc kubenswrapper[4725]: E1014 13:36:10.475360 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"469d6c9dd8280483d8bdc01c4219e2d1f77837452b0087234379692a73aa3bd2\": container with ID starting with 469d6c9dd8280483d8bdc01c4219e2d1f77837452b0087234379692a73aa3bd2 not found: ID does not exist" containerID="469d6c9dd8280483d8bdc01c4219e2d1f77837452b0087234379692a73aa3bd2" Oct 14 13:36:10 crc 
Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.475393 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"469d6c9dd8280483d8bdc01c4219e2d1f77837452b0087234379692a73aa3bd2"} err="failed to get container status \"469d6c9dd8280483d8bdc01c4219e2d1f77837452b0087234379692a73aa3bd2\": rpc error: code = NotFound desc = could not find container \"469d6c9dd8280483d8bdc01c4219e2d1f77837452b0087234379692a73aa3bd2\": container with ID starting with 469d6c9dd8280483d8bdc01c4219e2d1f77837452b0087234379692a73aa3bd2 not found: ID does not exist"
Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.486561 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-6544m"
Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.597216 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.597257 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.597286 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.597415 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.597526 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.597561 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.597583 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-722ct\" (UniqueName: \"kubernetes.io/projected/0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e-kube-api-access-722ct\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.597622 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.597690 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.597730 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.597759 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.602778 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.703599 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.703658 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.703682 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-722ct\" (UniqueName: \"kubernetes.io/projected/0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e-kube-api-access-722ct\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.703719 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.703788 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.703834 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.703865 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.703925 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.703950 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.703980 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.704017 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.706548 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.706890 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.707845 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.708281 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.708610 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.708697 4725 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-cell1-server-0"
Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.709637 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.711894 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.712052 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.712304 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.726059 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-722ct\" (UniqueName: \"kubernetes.io/projected/0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e-kube-api-access-722ct\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.752117 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.797387 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 14 13:36:10 crc kubenswrapper[4725]: I1014 13:36:10.927073 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-6544m"] Oct 14 13:36:11 crc kubenswrapper[4725]: I1014 13:36:11.236515 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 14 13:36:11 crc kubenswrapper[4725]: W1014 13:36:11.241830 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fa2f8ba_c687_4c04_a9aa_b24fe6579d5e.slice/crio-159f6597e270debae82d7bf1ab7bd403f194cf885da9264e1bdef23601c7922e WatchSource:0}: Error finding container 159f6597e270debae82d7bf1ab7bd403f194cf885da9264e1bdef23601c7922e: Status 404 returned error can't find the container with id 159f6597e270debae82d7bf1ab7bd403f194cf885da9264e1bdef23601c7922e Oct 14 13:36:11 crc kubenswrapper[4725]: I1014 13:36:11.415088 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cbf19e88-e140-4357-8255-fdc507d7db52","Type":"ContainerStarted","Data":"d9784b18f4498b9decae7cc1748eb669c8cfae94c5925abb2c64b81d78a00b88"} Oct 14 13:36:11 crc kubenswrapper[4725]: I1014 13:36:11.417311 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e","Type":"ContainerStarted","Data":"159f6597e270debae82d7bf1ab7bd403f194cf885da9264e1bdef23601c7922e"} Oct 14 13:36:11 crc kubenswrapper[4725]: I1014 13:36:11.419393 4725 generic.go:334] "Generic (PLEG): container finished" podID="212b7499-1ace-4744-be66-62bf9f3eb853" containerID="447f63500b582adf0dff406506a5ea587f6cf1d387e65566f711aa3f133a8fef" exitCode=0 Oct 14 13:36:11 crc kubenswrapper[4725]: I1014 13:36:11.419423 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-6544m" event={"ID":"212b7499-1ace-4744-be66-62bf9f3eb853","Type":"ContainerDied","Data":"447f63500b582adf0dff406506a5ea587f6cf1d387e65566f711aa3f133a8fef"} Oct 14 13:36:11 crc kubenswrapper[4725]: I1014 13:36:11.419440 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-6544m" event={"ID":"212b7499-1ace-4744-be66-62bf9f3eb853","Type":"ContainerStarted","Data":"3b5ab5a23dea270498d166eda05b18411267f2be767bf29c4ba277c7cbda1cf6"} Oct 14 13:36:11 crc kubenswrapper[4725]: I1014 13:36:11.933489 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e690ed1d-b1fe-48b5-817c-d512cef45181" path="/var/lib/kubelet/pods/e690ed1d-b1fe-48b5-817c-d512cef45181/volumes" Oct 14 13:36:12 crc kubenswrapper[4725]: I1014 13:36:12.436043 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-6544m" event={"ID":"212b7499-1ace-4744-be66-62bf9f3eb853","Type":"ContainerStarted","Data":"515c655ff933dfe41eb866c3d6dc8a22fa5376c8c321bb1876b3fc6b36dca74b"} Oct 14 13:36:12 crc kubenswrapper[4725]: I1014 13:36:12.436570 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67b789f86c-6544m" Oct 14 13:36:12 crc kubenswrapper[4725]: I1014 13:36:12.472338 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67b789f86c-6544m" podStartSLOduration=2.472318828 podStartE2EDuration="2.472318828s" podCreationTimestamp="2025-10-14 13:36:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:36:12.46809419 +0000 UTC m=+1289.316528999" watchObservedRunningTime="2025-10-14 13:36:12.472318828 +0000 UTC m=+1289.320753637" Oct 14 13:36:13 crc kubenswrapper[4725]: I1014 13:36:13.450636 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e","Type":"ContainerStarted","Data":"44fedbcd66c5bd6d2b047fccf601458c240d8d51dd194fe2591c8c6f783933c2"} Oct 14 13:36:13 crc kubenswrapper[4725]: I1014 13:36:13.454369 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cbf19e88-e140-4357-8255-fdc507d7db52","Type":"ContainerStarted","Data":"504389b9ae87243cfe73e13f7040f03726da475cf8b1689da9974bd741677a65"} Oct 14 13:36:20 crc kubenswrapper[4725]: I1014 13:36:20.488633 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67b789f86c-6544m" Oct 14 13:36:20 crc kubenswrapper[4725]: I1014 13:36:20.589326 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-jllfk"] Oct 14 13:36:20 crc kubenswrapper[4725]: I1014 13:36:20.589752 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59cf4bdb65-jllfk" podUID="dfe26ed5-d004-4505-8bfc-f72e530f121e" containerName="dnsmasq-dns" containerID="cri-o://6f170435a0c4ba96604923f83dc198318b3d4834f0b531dfc900c739a7c2877a" gracePeriod=10 Oct 14 13:36:20 crc kubenswrapper[4725]: I1014 13:36:20.775274 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cb6ffcf87-vngt8"] Oct 14 13:36:20 crc kubenswrapper[4725]: I1014 13:36:20.777046 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cb6ffcf87-vngt8" Oct 14 13:36:20 crc kubenswrapper[4725]: I1014 13:36:20.786037 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb6ffcf87-vngt8"] Oct 14 13:36:20 crc kubenswrapper[4725]: I1014 13:36:20.912787 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b1be84e-cdc5-433a-a684-b9938901a03a-ovsdbserver-nb\") pod \"dnsmasq-dns-cb6ffcf87-vngt8\" (UID: \"8b1be84e-cdc5-433a-a684-b9938901a03a\") " pod="openstack/dnsmasq-dns-cb6ffcf87-vngt8" Oct 14 13:36:20 crc kubenswrapper[4725]: I1014 13:36:20.912844 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8b1be84e-cdc5-433a-a684-b9938901a03a-openstack-edpm-ipam\") pod \"dnsmasq-dns-cb6ffcf87-vngt8\" (UID: \"8b1be84e-cdc5-433a-a684-b9938901a03a\") " pod="openstack/dnsmasq-dns-cb6ffcf87-vngt8" Oct 14 13:36:20 crc kubenswrapper[4725]: I1014 13:36:20.912881 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b1be84e-cdc5-433a-a684-b9938901a03a-config\") pod \"dnsmasq-dns-cb6ffcf87-vngt8\" (UID: \"8b1be84e-cdc5-433a-a684-b9938901a03a\") " pod="openstack/dnsmasq-dns-cb6ffcf87-vngt8" Oct 14 13:36:20 crc kubenswrapper[4725]: I1014 13:36:20.912899 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8b1be84e-cdc5-433a-a684-b9938901a03a-dns-swift-storage-0\") pod \"dnsmasq-dns-cb6ffcf87-vngt8\" (UID: 
\"8b1be84e-cdc5-433a-a684-b9938901a03a\") " pod="openstack/dnsmasq-dns-cb6ffcf87-vngt8" Oct 14 13:36:20 crc kubenswrapper[4725]: I1014 13:36:20.912953 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b1be84e-cdc5-433a-a684-b9938901a03a-ovsdbserver-sb\") pod \"dnsmasq-dns-cb6ffcf87-vngt8\" (UID: \"8b1be84e-cdc5-433a-a684-b9938901a03a\") " pod="openstack/dnsmasq-dns-cb6ffcf87-vngt8" Oct 14 13:36:20 crc kubenswrapper[4725]: I1014 13:36:20.912985 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c7kx\" (UniqueName: \"kubernetes.io/projected/8b1be84e-cdc5-433a-a684-b9938901a03a-kube-api-access-9c7kx\") pod \"dnsmasq-dns-cb6ffcf87-vngt8\" (UID: \"8b1be84e-cdc5-433a-a684-b9938901a03a\") " pod="openstack/dnsmasq-dns-cb6ffcf87-vngt8" Oct 14 13:36:20 crc kubenswrapper[4725]: I1014 13:36:20.913013 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b1be84e-cdc5-433a-a684-b9938901a03a-dns-svc\") pod \"dnsmasq-dns-cb6ffcf87-vngt8\" (UID: \"8b1be84e-cdc5-433a-a684-b9938901a03a\") " pod="openstack/dnsmasq-dns-cb6ffcf87-vngt8" Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.014739 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c7kx\" (UniqueName: \"kubernetes.io/projected/8b1be84e-cdc5-433a-a684-b9938901a03a-kube-api-access-9c7kx\") pod \"dnsmasq-dns-cb6ffcf87-vngt8\" (UID: \"8b1be84e-cdc5-433a-a684-b9938901a03a\") " pod="openstack/dnsmasq-dns-cb6ffcf87-vngt8" Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.014988 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b1be84e-cdc5-433a-a684-b9938901a03a-dns-svc\") pod \"dnsmasq-dns-cb6ffcf87-vngt8\" (UID: \"8b1be84e-cdc5-433a-a684-b9938901a03a\") " pod="openstack/dnsmasq-dns-cb6ffcf87-vngt8" Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.015112 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b1be84e-cdc5-433a-a684-b9938901a03a-ovsdbserver-nb\") pod \"dnsmasq-dns-cb6ffcf87-vngt8\" (UID: \"8b1be84e-cdc5-433a-a684-b9938901a03a\") " pod="openstack/dnsmasq-dns-cb6ffcf87-vngt8" Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.015219 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8b1be84e-cdc5-433a-a684-b9938901a03a-openstack-edpm-ipam\") pod \"dnsmasq-dns-cb6ffcf87-vngt8\" (UID: \"8b1be84e-cdc5-433a-a684-b9938901a03a\") " pod="openstack/dnsmasq-dns-cb6ffcf87-vngt8" Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.015312 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b1be84e-cdc5-433a-a684-b9938901a03a-config\") pod \"dnsmasq-dns-cb6ffcf87-vngt8\" (UID: \"8b1be84e-cdc5-433a-a684-b9938901a03a\") " pod="openstack/dnsmasq-dns-cb6ffcf87-vngt8" Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.015378 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8b1be84e-cdc5-433a-a684-b9938901a03a-dns-swift-storage-0\") pod \"dnsmasq-dns-cb6ffcf87-vngt8\" (UID: 
\"8b1be84e-cdc5-433a-a684-b9938901a03a\") " pod="openstack/dnsmasq-dns-cb6ffcf87-vngt8" Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.015573 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b1be84e-cdc5-433a-a684-b9938901a03a-ovsdbserver-sb\") pod \"dnsmasq-dns-cb6ffcf87-vngt8\" (UID: \"8b1be84e-cdc5-433a-a684-b9938901a03a\") " pod="openstack/dnsmasq-dns-cb6ffcf87-vngt8" Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.016391 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b1be84e-cdc5-433a-a684-b9938901a03a-ovsdbserver-sb\") pod \"dnsmasq-dns-cb6ffcf87-vngt8\" (UID: \"8b1be84e-cdc5-433a-a684-b9938901a03a\") " pod="openstack/dnsmasq-dns-cb6ffcf87-vngt8" Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.017385 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b1be84e-cdc5-433a-a684-b9938901a03a-dns-svc\") pod \"dnsmasq-dns-cb6ffcf87-vngt8\" (UID: \"8b1be84e-cdc5-433a-a684-b9938901a03a\") " pod="openstack/dnsmasq-dns-cb6ffcf87-vngt8" Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.017998 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b1be84e-cdc5-433a-a684-b9938901a03a-ovsdbserver-nb\") pod \"dnsmasq-dns-cb6ffcf87-vngt8\" (UID: \"8b1be84e-cdc5-433a-a684-b9938901a03a\") " pod="openstack/dnsmasq-dns-cb6ffcf87-vngt8" Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.018491 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8b1be84e-cdc5-433a-a684-b9938901a03a-dns-swift-storage-0\") pod \"dnsmasq-dns-cb6ffcf87-vngt8\" (UID: \"8b1be84e-cdc5-433a-a684-b9938901a03a\") " pod="openstack/dnsmasq-dns-cb6ffcf87-vngt8" Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.019430 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8b1be84e-cdc5-433a-a684-b9938901a03a-openstack-edpm-ipam\") pod \"dnsmasq-dns-cb6ffcf87-vngt8\" (UID: \"8b1be84e-cdc5-433a-a684-b9938901a03a\") " pod="openstack/dnsmasq-dns-cb6ffcf87-vngt8" Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.019634 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b1be84e-cdc5-433a-a684-b9938901a03a-config\") pod \"dnsmasq-dns-cb6ffcf87-vngt8\" (UID: \"8b1be84e-cdc5-433a-a684-b9938901a03a\") " pod="openstack/dnsmasq-dns-cb6ffcf87-vngt8" Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.037051 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c7kx\" (UniqueName: \"kubernetes.io/projected/8b1be84e-cdc5-433a-a684-b9938901a03a-kube-api-access-9c7kx\") pod \"dnsmasq-dns-cb6ffcf87-vngt8\" (UID: \"8b1be84e-cdc5-433a-a684-b9938901a03a\") " pod="openstack/dnsmasq-dns-cb6ffcf87-vngt8" Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.095573 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cb6ffcf87-vngt8" Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.227358 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-jllfk" Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.320986 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dfe26ed5-d004-4505-8bfc-f72e530f121e-ovsdbserver-sb\") pod \"dfe26ed5-d004-4505-8bfc-f72e530f121e\" (UID: \"dfe26ed5-d004-4505-8bfc-f72e530f121e\") " Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.321928 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dfe26ed5-d004-4505-8bfc-f72e530f121e-ovsdbserver-nb\") pod \"dfe26ed5-d004-4505-8bfc-f72e530f121e\" (UID: \"dfe26ed5-d004-4505-8bfc-f72e530f121e\") " Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.321970 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dfe26ed5-d004-4505-8bfc-f72e530f121e-dns-swift-storage-0\") pod \"dfe26ed5-d004-4505-8bfc-f72e530f121e\" (UID: \"dfe26ed5-d004-4505-8bfc-f72e530f121e\") " Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.322071 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmh22\" (UniqueName: \"kubernetes.io/projected/dfe26ed5-d004-4505-8bfc-f72e530f121e-kube-api-access-hmh22\") pod \"dfe26ed5-d004-4505-8bfc-f72e530f121e\" (UID: \"dfe26ed5-d004-4505-8bfc-f72e530f121e\") " Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.322107 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfe26ed5-d004-4505-8bfc-f72e530f121e-config\") pod \"dfe26ed5-d004-4505-8bfc-f72e530f121e\" (UID: \"dfe26ed5-d004-4505-8bfc-f72e530f121e\") " Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.322185 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dfe26ed5-d004-4505-8bfc-f72e530f121e-dns-svc\") pod \"dfe26ed5-d004-4505-8bfc-f72e530f121e\" (UID: \"dfe26ed5-d004-4505-8bfc-f72e530f121e\") " Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.343877 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfe26ed5-d004-4505-8bfc-f72e530f121e-kube-api-access-hmh22" (OuterVolumeSpecName: "kube-api-access-hmh22") pod "dfe26ed5-d004-4505-8bfc-f72e530f121e" (UID: "dfe26ed5-d004-4505-8bfc-f72e530f121e"). InnerVolumeSpecName "kube-api-access-hmh22". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.377289 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfe26ed5-d004-4505-8bfc-f72e530f121e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "dfe26ed5-d004-4505-8bfc-f72e530f121e" (UID: "dfe26ed5-d004-4505-8bfc-f72e530f121e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.381286 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfe26ed5-d004-4505-8bfc-f72e530f121e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dfe26ed5-d004-4505-8bfc-f72e530f121e" (UID: "dfe26ed5-d004-4505-8bfc-f72e530f121e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.394536 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfe26ed5-d004-4505-8bfc-f72e530f121e-config" (OuterVolumeSpecName: "config") pod "dfe26ed5-d004-4505-8bfc-f72e530f121e" (UID: "dfe26ed5-d004-4505-8bfc-f72e530f121e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.395629 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfe26ed5-d004-4505-8bfc-f72e530f121e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dfe26ed5-d004-4505-8bfc-f72e530f121e" (UID: "dfe26ed5-d004-4505-8bfc-f72e530f121e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.397137 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfe26ed5-d004-4505-8bfc-f72e530f121e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dfe26ed5-d004-4505-8bfc-f72e530f121e" (UID: "dfe26ed5-d004-4505-8bfc-f72e530f121e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.424412 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dfe26ed5-d004-4505-8bfc-f72e530f121e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.424441 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dfe26ed5-d004-4505-8bfc-f72e530f121e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.424544 4725 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dfe26ed5-d004-4505-8bfc-f72e530f121e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.424565 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmh22\" (UniqueName: \"kubernetes.io/projected/dfe26ed5-d004-4505-8bfc-f72e530f121e-kube-api-access-hmh22\") on node \"crc\" DevicePath \"\"" Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.424576 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfe26ed5-d004-4505-8bfc-f72e530f121e-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.424585 4725 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dfe26ed5-d004-4505-8bfc-f72e530f121e-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.549601 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb6ffcf87-vngt8"] Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.576609 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb6ffcf87-vngt8" event={"ID":"8b1be84e-cdc5-433a-a684-b9938901a03a","Type":"ContainerStarted","Data":"c1c217dace6f89982d56b68c5f06f623f97efc06bcfbed5fac58891ec8736e71"} Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.578824 4725 generic.go:334] "Generic (PLEG): container 
finished" podID="dfe26ed5-d004-4505-8bfc-f72e530f121e" containerID="6f170435a0c4ba96604923f83dc198318b3d4834f0b531dfc900c739a7c2877a" exitCode=0 Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.578867 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-jllfk" event={"ID":"dfe26ed5-d004-4505-8bfc-f72e530f121e","Type":"ContainerDied","Data":"6f170435a0c4ba96604923f83dc198318b3d4834f0b531dfc900c739a7c2877a"} Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.578894 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-jllfk" event={"ID":"dfe26ed5-d004-4505-8bfc-f72e530f121e","Type":"ContainerDied","Data":"618222870cf763bfd67ee8ed41704ad8fe8587d8dfff17be5978ef07c571ad54"} Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.578909 4725 scope.go:117] "RemoveContainer" containerID="6f170435a0c4ba96604923f83dc198318b3d4834f0b531dfc900c739a7c2877a" Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.579025 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-jllfk" Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.615233 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-jllfk"] Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.628560 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-jllfk"] Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.634443 4725 scope.go:117] "RemoveContainer" containerID="6e30c31de2a42ce4b3042cf696bf6c99fe7ad3cab41da45995f8bc4e1bdcf1d1" Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.659330 4725 scope.go:117] "RemoveContainer" containerID="6f170435a0c4ba96604923f83dc198318b3d4834f0b531dfc900c739a7c2877a" Oct 14 13:36:21 crc kubenswrapper[4725]: E1014 13:36:21.660075 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f170435a0c4ba96604923f83dc198318b3d4834f0b531dfc900c739a7c2877a\": container with ID starting with 6f170435a0c4ba96604923f83dc198318b3d4834f0b531dfc900c739a7c2877a not found: ID does not exist" containerID="6f170435a0c4ba96604923f83dc198318b3d4834f0b531dfc900c739a7c2877a" Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.660131 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f170435a0c4ba96604923f83dc198318b3d4834f0b531dfc900c739a7c2877a"} err="failed to get container status \"6f170435a0c4ba96604923f83dc198318b3d4834f0b531dfc900c739a7c2877a\": rpc error: code = NotFound desc = could not find container \"6f170435a0c4ba96604923f83dc198318b3d4834f0b531dfc900c739a7c2877a\": container with ID starting with 6f170435a0c4ba96604923f83dc198318b3d4834f0b531dfc900c739a7c2877a not found: ID does not exist" Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.660165 4725 scope.go:117] "RemoveContainer" containerID="6e30c31de2a42ce4b3042cf696bf6c99fe7ad3cab41da45995f8bc4e1bdcf1d1" Oct 14 13:36:21 crc kubenswrapper[4725]: E1014 13:36:21.660906 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e30c31de2a42ce4b3042cf696bf6c99fe7ad3cab41da45995f8bc4e1bdcf1d1\": container with ID starting with 6e30c31de2a42ce4b3042cf696bf6c99fe7ad3cab41da45995f8bc4e1bdcf1d1 not found: ID does not exist" containerID="6e30c31de2a42ce4b3042cf696bf6c99fe7ad3cab41da45995f8bc4e1bdcf1d1" Oct 14 13:36:21 crc 
kubenswrapper[4725]: I1014 13:36:21.660966 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e30c31de2a42ce4b3042cf696bf6c99fe7ad3cab41da45995f8bc4e1bdcf1d1"} err="failed to get container status \"6e30c31de2a42ce4b3042cf696bf6c99fe7ad3cab41da45995f8bc4e1bdcf1d1\": rpc error: code = NotFound desc = could not find container \"6e30c31de2a42ce4b3042cf696bf6c99fe7ad3cab41da45995f8bc4e1bdcf1d1\": container with ID starting with 6e30c31de2a42ce4b3042cf696bf6c99fe7ad3cab41da45995f8bc4e1bdcf1d1 not found: ID does not exist" Oct 14 13:36:21 crc kubenswrapper[4725]: I1014 13:36:21.932092 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfe26ed5-d004-4505-8bfc-f72e530f121e" path="/var/lib/kubelet/pods/dfe26ed5-d004-4505-8bfc-f72e530f121e/volumes" Oct 14 13:36:22 crc kubenswrapper[4725]: I1014 13:36:22.589917 4725 generic.go:334] "Generic (PLEG): container finished" podID="8b1be84e-cdc5-433a-a684-b9938901a03a" containerID="4d26fb8be877ff09a824acb9f5f9c83234fc13c6b08025c3c6a2ff03bea26503" exitCode=0 Oct 14 13:36:22 crc kubenswrapper[4725]: I1014 13:36:22.589956 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb6ffcf87-vngt8" event={"ID":"8b1be84e-cdc5-433a-a684-b9938901a03a","Type":"ContainerDied","Data":"4d26fb8be877ff09a824acb9f5f9c83234fc13c6b08025c3c6a2ff03bea26503"} Oct 14 13:36:23 crc kubenswrapper[4725]: I1014 13:36:23.607490 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb6ffcf87-vngt8" event={"ID":"8b1be84e-cdc5-433a-a684-b9938901a03a","Type":"ContainerStarted","Data":"8accffc9bb54153c5f19e8ac5dea19fb1070105284feb524ae92a1f15f8c06e7"} Oct 14 13:36:23 crc kubenswrapper[4725]: I1014 13:36:23.607969 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cb6ffcf87-vngt8" Oct 14 13:36:23 crc kubenswrapper[4725]: I1014 13:36:23.640181 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cb6ffcf87-vngt8" podStartSLOduration=3.64016155 podStartE2EDuration="3.64016155s" podCreationTimestamp="2025-10-14 13:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:36:23.628706553 +0000 UTC m=+1300.477141382" watchObservedRunningTime="2025-10-14 13:36:23.64016155 +0000 UTC m=+1300.488596369" Oct 14 13:36:31 crc kubenswrapper[4725]: I1014 13:36:31.098068 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cb6ffcf87-vngt8" Oct 14 13:36:31 crc kubenswrapper[4725]: I1014 13:36:31.192736 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-6544m"] Oct 14 13:36:31 crc kubenswrapper[4725]: I1014 13:36:31.193149 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67b789f86c-6544m" podUID="212b7499-1ace-4744-be66-62bf9f3eb853" containerName="dnsmasq-dns" containerID="cri-o://515c655ff933dfe41eb866c3d6dc8a22fa5376c8c321bb1876b3fc6b36dca74b" gracePeriod=10 Oct 14 13:36:31 crc kubenswrapper[4725]: I1014 13:36:31.668156 4725 util.go:48] "No ready sandbox for pod can be found. 
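[Annotation] "Killing container with a grace period ... gracePeriod=10" above is the standard SIGTERM-then-SIGKILL contract: the runtime delivers SIGTERM, waits up to the grace period, and only then forces SIGKILL (here dnsmasq exits promptly, with exitCode=0 appearing within a second). A minimal stand-alone sketch of the pattern, not the CRI-O implementation:

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// killWithGrace sends SIGTERM, then SIGKILL if the process outlives the
// grace period -- the same contract as gracePeriod=10 above.
func killWithGrace(cmd *exec.Cmd, grace time.Duration) {
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	cmd.Process.Signal(syscall.SIGTERM)
	select {
	case err := <-done:
		fmt.Println("exited within grace period:", err)
	case <-time.After(grace):
		cmd.Process.Kill() // grace exhausted: force SIGKILL
		fmt.Println("grace period expired, sent SIGKILL")
		<-done
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	killWithGrace(cmd, 10*time.Second)
}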
Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-6544m" Oct 14 13:36:31 crc kubenswrapper[4725]: I1014 13:36:31.696771 4725 generic.go:334] "Generic (PLEG): container finished" podID="212b7499-1ace-4744-be66-62bf9f3eb853" containerID="515c655ff933dfe41eb866c3d6dc8a22fa5376c8c321bb1876b3fc6b36dca74b" exitCode=0 Oct 14 13:36:31 crc kubenswrapper[4725]: I1014 13:36:31.696883 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-6544m" Oct 14 13:36:31 crc kubenswrapper[4725]: I1014 13:36:31.696912 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-6544m" event={"ID":"212b7499-1ace-4744-be66-62bf9f3eb853","Type":"ContainerDied","Data":"515c655ff933dfe41eb866c3d6dc8a22fa5376c8c321bb1876b3fc6b36dca74b"} Oct 14 13:36:31 crc kubenswrapper[4725]: I1014 13:36:31.697194 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-6544m" event={"ID":"212b7499-1ace-4744-be66-62bf9f3eb853","Type":"ContainerDied","Data":"3b5ab5a23dea270498d166eda05b18411267f2be767bf29c4ba277c7cbda1cf6"} Oct 14 13:36:31 crc kubenswrapper[4725]: I1014 13:36:31.697244 4725 scope.go:117] "RemoveContainer" containerID="515c655ff933dfe41eb866c3d6dc8a22fa5376c8c321bb1876b3fc6b36dca74b" Oct 14 13:36:31 crc kubenswrapper[4725]: I1014 13:36:31.716489 4725 scope.go:117] "RemoveContainer" containerID="447f63500b582adf0dff406506a5ea587f6cf1d387e65566f711aa3f133a8fef" Oct 14 13:36:31 crc kubenswrapper[4725]: I1014 13:36:31.732012 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfrhr\" (UniqueName: \"kubernetes.io/projected/212b7499-1ace-4744-be66-62bf9f3eb853-kube-api-access-pfrhr\") pod \"212b7499-1ace-4744-be66-62bf9f3eb853\" (UID: \"212b7499-1ace-4744-be66-62bf9f3eb853\") " Oct 14 13:36:31 crc kubenswrapper[4725]: I1014 13:36:31.732103 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/212b7499-1ace-4744-be66-62bf9f3eb853-openstack-edpm-ipam\") pod \"212b7499-1ace-4744-be66-62bf9f3eb853\" (UID: \"212b7499-1ace-4744-be66-62bf9f3eb853\") " Oct 14 13:36:31 crc kubenswrapper[4725]: I1014 13:36:31.732189 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/212b7499-1ace-4744-be66-62bf9f3eb853-config\") pod \"212b7499-1ace-4744-be66-62bf9f3eb853\" (UID: \"212b7499-1ace-4744-be66-62bf9f3eb853\") " Oct 14 13:36:31 crc kubenswrapper[4725]: I1014 13:36:31.732215 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/212b7499-1ace-4744-be66-62bf9f3eb853-ovsdbserver-sb\") pod \"212b7499-1ace-4744-be66-62bf9f3eb853\" (UID: \"212b7499-1ace-4744-be66-62bf9f3eb853\") " Oct 14 13:36:31 crc kubenswrapper[4725]: I1014 13:36:31.732280 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/212b7499-1ace-4744-be66-62bf9f3eb853-dns-svc\") pod \"212b7499-1ace-4744-be66-62bf9f3eb853\" (UID: \"212b7499-1ace-4744-be66-62bf9f3eb853\") " Oct 14 13:36:31 crc kubenswrapper[4725]: I1014 13:36:31.732297 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/212b7499-1ace-4744-be66-62bf9f3eb853-ovsdbserver-nb\") pod 
\"212b7499-1ace-4744-be66-62bf9f3eb853\" (UID: \"212b7499-1ace-4744-be66-62bf9f3eb853\") " Oct 14 13:36:31 crc kubenswrapper[4725]: I1014 13:36:31.732344 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/212b7499-1ace-4744-be66-62bf9f3eb853-dns-swift-storage-0\") pod \"212b7499-1ace-4744-be66-62bf9f3eb853\" (UID: \"212b7499-1ace-4744-be66-62bf9f3eb853\") " Oct 14 13:36:31 crc kubenswrapper[4725]: I1014 13:36:31.740778 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/212b7499-1ace-4744-be66-62bf9f3eb853-kube-api-access-pfrhr" (OuterVolumeSpecName: "kube-api-access-pfrhr") pod "212b7499-1ace-4744-be66-62bf9f3eb853" (UID: "212b7499-1ace-4744-be66-62bf9f3eb853"). InnerVolumeSpecName "kube-api-access-pfrhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:36:31 crc kubenswrapper[4725]: I1014 13:36:31.743322 4725 scope.go:117] "RemoveContainer" containerID="515c655ff933dfe41eb866c3d6dc8a22fa5376c8c321bb1876b3fc6b36dca74b" Oct 14 13:36:31 crc kubenswrapper[4725]: E1014 13:36:31.743831 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"515c655ff933dfe41eb866c3d6dc8a22fa5376c8c321bb1876b3fc6b36dca74b\": container with ID starting with 515c655ff933dfe41eb866c3d6dc8a22fa5376c8c321bb1876b3fc6b36dca74b not found: ID does not exist" containerID="515c655ff933dfe41eb866c3d6dc8a22fa5376c8c321bb1876b3fc6b36dca74b" Oct 14 13:36:31 crc kubenswrapper[4725]: I1014 13:36:31.743881 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"515c655ff933dfe41eb866c3d6dc8a22fa5376c8c321bb1876b3fc6b36dca74b"} err="failed to get container status \"515c655ff933dfe41eb866c3d6dc8a22fa5376c8c321bb1876b3fc6b36dca74b\": rpc error: code = NotFound desc = could not find container \"515c655ff933dfe41eb866c3d6dc8a22fa5376c8c321bb1876b3fc6b36dca74b\": container with ID starting with 515c655ff933dfe41eb866c3d6dc8a22fa5376c8c321bb1876b3fc6b36dca74b not found: ID does not exist" Oct 14 13:36:31 crc kubenswrapper[4725]: I1014 13:36:31.743913 4725 scope.go:117] "RemoveContainer" containerID="447f63500b582adf0dff406506a5ea587f6cf1d387e65566f711aa3f133a8fef" Oct 14 13:36:31 crc kubenswrapper[4725]: E1014 13:36:31.744436 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"447f63500b582adf0dff406506a5ea587f6cf1d387e65566f711aa3f133a8fef\": container with ID starting with 447f63500b582adf0dff406506a5ea587f6cf1d387e65566f711aa3f133a8fef not found: ID does not exist" containerID="447f63500b582adf0dff406506a5ea587f6cf1d387e65566f711aa3f133a8fef" Oct 14 13:36:31 crc kubenswrapper[4725]: I1014 13:36:31.744583 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"447f63500b582adf0dff406506a5ea587f6cf1d387e65566f711aa3f133a8fef"} err="failed to get container status \"447f63500b582adf0dff406506a5ea587f6cf1d387e65566f711aa3f133a8fef\": rpc error: code = NotFound desc = could not find container \"447f63500b582adf0dff406506a5ea587f6cf1d387e65566f711aa3f133a8fef\": container with ID starting with 447f63500b582adf0dff406506a5ea587f6cf1d387e65566f711aa3f133a8fef not found: ID does not exist" Oct 14 13:36:31 crc kubenswrapper[4725]: I1014 13:36:31.787957 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/212b7499-1ace-4744-be66-62bf9f3eb853-config" (OuterVolumeSpecName: "config") pod "212b7499-1ace-4744-be66-62bf9f3eb853" (UID: "212b7499-1ace-4744-be66-62bf9f3eb853"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:36:31 crc kubenswrapper[4725]: I1014 13:36:31.789936 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/212b7499-1ace-4744-be66-62bf9f3eb853-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "212b7499-1ace-4744-be66-62bf9f3eb853" (UID: "212b7499-1ace-4744-be66-62bf9f3eb853"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:36:31 crc kubenswrapper[4725]: I1014 13:36:31.793601 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/212b7499-1ace-4744-be66-62bf9f3eb853-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "212b7499-1ace-4744-be66-62bf9f3eb853" (UID: "212b7499-1ace-4744-be66-62bf9f3eb853"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:36:31 crc kubenswrapper[4725]: I1014 13:36:31.797650 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/212b7499-1ace-4744-be66-62bf9f3eb853-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "212b7499-1ace-4744-be66-62bf9f3eb853" (UID: "212b7499-1ace-4744-be66-62bf9f3eb853"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:36:31 crc kubenswrapper[4725]: I1014 13:36:31.800400 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/212b7499-1ace-4744-be66-62bf9f3eb853-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "212b7499-1ace-4744-be66-62bf9f3eb853" (UID: "212b7499-1ace-4744-be66-62bf9f3eb853"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:36:31 crc kubenswrapper[4725]: I1014 13:36:31.802595 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/212b7499-1ace-4744-be66-62bf9f3eb853-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "212b7499-1ace-4744-be66-62bf9f3eb853" (UID: "212b7499-1ace-4744-be66-62bf9f3eb853"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:36:31 crc kubenswrapper[4725]: I1014 13:36:31.834750 4725 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/212b7499-1ace-4744-be66-62bf9f3eb853-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 14 13:36:31 crc kubenswrapper[4725]: I1014 13:36:31.834802 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/212b7499-1ace-4744-be66-62bf9f3eb853-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 14 13:36:31 crc kubenswrapper[4725]: I1014 13:36:31.834816 4725 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/212b7499-1ace-4744-be66-62bf9f3eb853-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 14 13:36:31 crc kubenswrapper[4725]: I1014 13:36:31.834828 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfrhr\" (UniqueName: \"kubernetes.io/projected/212b7499-1ace-4744-be66-62bf9f3eb853-kube-api-access-pfrhr\") on node \"crc\" DevicePath \"\"" Oct 14 13:36:31 crc kubenswrapper[4725]: I1014 13:36:31.834839 4725 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/212b7499-1ace-4744-be66-62bf9f3eb853-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Oct 14 13:36:31 crc kubenswrapper[4725]: I1014 13:36:31.834849 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/212b7499-1ace-4744-be66-62bf9f3eb853-config\") on node \"crc\" DevicePath \"\"" Oct 14 13:36:31 crc kubenswrapper[4725]: I1014 13:36:31.834859 4725 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/212b7499-1ace-4744-be66-62bf9f3eb853-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 14 13:36:32 crc kubenswrapper[4725]: I1014 13:36:32.028778 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-6544m"] Oct 14 13:36:32 crc kubenswrapper[4725]: I1014 13:36:32.036983 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-6544m"] Oct 14 13:36:32 crc kubenswrapper[4725]: I1014 13:36:32.520953 4725 patch_prober.go:28] interesting pod/machine-config-daemon-t9hh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 13:36:32 crc kubenswrapper[4725]: I1014 13:36:32.521021 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 13:36:33 crc kubenswrapper[4725]: I1014 13:36:33.961601 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="212b7499-1ace-4744-be66-62bf9f3eb853" path="/var/lib/kubelet/pods/212b7499-1ace-4744-be66-62bf9f3eb853/volumes" Oct 14 13:36:45 crc kubenswrapper[4725]: I1014 13:36:45.857329 4725 generic.go:334] "Generic (PLEG): container finished" podID="0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e" containerID="44fedbcd66c5bd6d2b047fccf601458c240d8d51dd194fe2591c8c6f783933c2" exitCode=0 Oct 14 13:36:45 crc 
kubenswrapper[4725]: I1014 13:36:45.857399 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e","Type":"ContainerDied","Data":"44fedbcd66c5bd6d2b047fccf601458c240d8d51dd194fe2591c8c6f783933c2"} Oct 14 13:36:45 crc kubenswrapper[4725]: I1014 13:36:45.861111 4725 generic.go:334] "Generic (PLEG): container finished" podID="cbf19e88-e140-4357-8255-fdc507d7db52" containerID="504389b9ae87243cfe73e13f7040f03726da475cf8b1689da9974bd741677a65" exitCode=0 Oct 14 13:36:45 crc kubenswrapper[4725]: I1014 13:36:45.861254 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cbf19e88-e140-4357-8255-fdc507d7db52","Type":"ContainerDied","Data":"504389b9ae87243cfe73e13f7040f03726da475cf8b1689da9974bd741677a65"} Oct 14 13:36:46 crc kubenswrapper[4725]: I1014 13:36:46.872507 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e","Type":"ContainerStarted","Data":"0dee5086284e1bd52488ac7100d095bc1d04f91aea53112adc2e870ee3a7eb68"} Oct 14 13:36:46 crc kubenswrapper[4725]: I1014 13:36:46.875262 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Oct 14 13:36:46 crc kubenswrapper[4725]: I1014 13:36:46.878628 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cbf19e88-e140-4357-8255-fdc507d7db52","Type":"ContainerStarted","Data":"20eac4718e74687c19f2de7aead400a32f08833cf376413b75e7dc608cd63106"} Oct 14 13:36:46 crc kubenswrapper[4725]: I1014 13:36:46.878885 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Oct 14 13:36:46 crc kubenswrapper[4725]: I1014 13:36:46.906596 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.906575392 podStartE2EDuration="36.906575392s" podCreationTimestamp="2025-10-14 13:36:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:36:46.901885052 +0000 UTC m=+1323.750319861" watchObservedRunningTime="2025-10-14 13:36:46.906575392 +0000 UTC m=+1323.755010201" Oct 14 13:36:46 crc kubenswrapper[4725]: I1014 13:36:46.926742 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.92672609 podStartE2EDuration="37.92672609s" podCreationTimestamp="2025-10-14 13:36:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 13:36:46.926058741 +0000 UTC m=+1323.774493550" watchObservedRunningTime="2025-10-14 13:36:46.92672609 +0000 UTC m=+1323.775160899" Oct 14 13:36:48 crc kubenswrapper[4725]: I1014 13:36:48.164023 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pb4gn"] Oct 14 13:36:48 crc kubenswrapper[4725]: E1014 13:36:48.164708 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfe26ed5-d004-4505-8bfc-f72e530f121e" containerName="init" Oct 14 13:36:48 crc kubenswrapper[4725]: I1014 13:36:48.164724 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfe26ed5-d004-4505-8bfc-f72e530f121e" containerName="init" Oct 14 13:36:48 crc kubenswrapper[4725]: E1014 13:36:48.164737 4725 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="212b7499-1ace-4744-be66-62bf9f3eb853" containerName="dnsmasq-dns" Oct 14 13:36:48 crc kubenswrapper[4725]: I1014 13:36:48.164743 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="212b7499-1ace-4744-be66-62bf9f3eb853" containerName="dnsmasq-dns" Oct 14 13:36:48 crc kubenswrapper[4725]: E1014 13:36:48.164764 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfe26ed5-d004-4505-8bfc-f72e530f121e" containerName="dnsmasq-dns" Oct 14 13:36:48 crc kubenswrapper[4725]: I1014 13:36:48.164772 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfe26ed5-d004-4505-8bfc-f72e530f121e" containerName="dnsmasq-dns" Oct 14 13:36:48 crc kubenswrapper[4725]: E1014 13:36:48.164786 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="212b7499-1ace-4744-be66-62bf9f3eb853" containerName="init" Oct 14 13:36:48 crc kubenswrapper[4725]: I1014 13:36:48.164792 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="212b7499-1ace-4744-be66-62bf9f3eb853" containerName="init" Oct 14 13:36:48 crc kubenswrapper[4725]: I1014 13:36:48.164967 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="212b7499-1ace-4744-be66-62bf9f3eb853" containerName="dnsmasq-dns" Oct 14 13:36:48 crc kubenswrapper[4725]: I1014 13:36:48.164984 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfe26ed5-d004-4505-8bfc-f72e530f121e" containerName="dnsmasq-dns" Oct 14 13:36:48 crc kubenswrapper[4725]: I1014 13:36:48.165602 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pb4gn" Oct 14 13:36:48 crc kubenswrapper[4725]: I1014 13:36:48.174050 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pb4gn"] Oct 14 13:36:48 crc kubenswrapper[4725]: I1014 13:36:48.178196 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 13:36:48 crc kubenswrapper[4725]: I1014 13:36:48.178559 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qnvbv" Oct 14 13:36:48 crc kubenswrapper[4725]: I1014 13:36:48.178625 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 13:36:48 crc kubenswrapper[4725]: I1014 13:36:48.181789 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 13:36:48 crc kubenswrapper[4725]: I1014 13:36:48.270385 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16f2164a-fbb3-4515-8732-723ea2301364-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pb4gn\" (UID: \"16f2164a-fbb3-4515-8732-723ea2301364\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pb4gn" Oct 14 13:36:48 crc kubenswrapper[4725]: I1014 13:36:48.270533 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvdqc\" (UniqueName: \"kubernetes.io/projected/16f2164a-fbb3-4515-8732-723ea2301364-kube-api-access-bvdqc\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pb4gn\" (UID: \"16f2164a-fbb3-4515-8732-723ea2301364\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pb4gn" Oct 
14 13:36:48 crc kubenswrapper[4725]: I1014 13:36:48.270571 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16f2164a-fbb3-4515-8732-723ea2301364-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pb4gn\" (UID: \"16f2164a-fbb3-4515-8732-723ea2301364\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pb4gn" Oct 14 13:36:48 crc kubenswrapper[4725]: I1014 13:36:48.270617 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/16f2164a-fbb3-4515-8732-723ea2301364-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pb4gn\" (UID: \"16f2164a-fbb3-4515-8732-723ea2301364\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pb4gn" Oct 14 13:36:48 crc kubenswrapper[4725]: I1014 13:36:48.372977 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16f2164a-fbb3-4515-8732-723ea2301364-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pb4gn\" (UID: \"16f2164a-fbb3-4515-8732-723ea2301364\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pb4gn" Oct 14 13:36:48 crc kubenswrapper[4725]: I1014 13:36:48.373126 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvdqc\" (UniqueName: \"kubernetes.io/projected/16f2164a-fbb3-4515-8732-723ea2301364-kube-api-access-bvdqc\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pb4gn\" (UID: \"16f2164a-fbb3-4515-8732-723ea2301364\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pb4gn" Oct 14 13:36:48 crc kubenswrapper[4725]: I1014 13:36:48.373163 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16f2164a-fbb3-4515-8732-723ea2301364-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pb4gn\" (UID: \"16f2164a-fbb3-4515-8732-723ea2301364\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pb4gn" Oct 14 13:36:48 crc kubenswrapper[4725]: I1014 13:36:48.373210 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/16f2164a-fbb3-4515-8732-723ea2301364-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pb4gn\" (UID: \"16f2164a-fbb3-4515-8732-723ea2301364\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pb4gn" Oct 14 13:36:48 crc kubenswrapper[4725]: I1014 13:36:48.378903 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16f2164a-fbb3-4515-8732-723ea2301364-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pb4gn\" (UID: \"16f2164a-fbb3-4515-8732-723ea2301364\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pb4gn" Oct 14 13:36:48 crc kubenswrapper[4725]: I1014 13:36:48.379309 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16f2164a-fbb3-4515-8732-723ea2301364-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pb4gn\" (UID: \"16f2164a-fbb3-4515-8732-723ea2301364\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pb4gn" Oct 14 13:36:48 crc kubenswrapper[4725]: 
I1014 13:36:48.388609 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/16f2164a-fbb3-4515-8732-723ea2301364-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pb4gn\" (UID: \"16f2164a-fbb3-4515-8732-723ea2301364\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pb4gn" Oct 14 13:36:48 crc kubenswrapper[4725]: I1014 13:36:48.396440 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvdqc\" (UniqueName: \"kubernetes.io/projected/16f2164a-fbb3-4515-8732-723ea2301364-kube-api-access-bvdqc\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pb4gn\" (UID: \"16f2164a-fbb3-4515-8732-723ea2301364\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pb4gn" Oct 14 13:36:48 crc kubenswrapper[4725]: I1014 13:36:48.486530 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pb4gn" Oct 14 13:36:49 crc kubenswrapper[4725]: I1014 13:36:49.041479 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pb4gn"] Oct 14 13:36:49 crc kubenswrapper[4725]: W1014 13:36:49.054366 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16f2164a_fbb3_4515_8732_723ea2301364.slice/crio-7484427ca0999f23fc70a50b602102bcc651fc095e30dc5e2f708e8748f08dfc WatchSource:0}: Error finding container 7484427ca0999f23fc70a50b602102bcc651fc095e30dc5e2f708e8748f08dfc: Status 404 returned error can't find the container with id 7484427ca0999f23fc70a50b602102bcc651fc095e30dc5e2f708e8748f08dfc Oct 14 13:36:49 crc kubenswrapper[4725]: I1014 13:36:49.056972 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 13:36:49 crc kubenswrapper[4725]: I1014 13:36:49.914743 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pb4gn" event={"ID":"16f2164a-fbb3-4515-8732-723ea2301364","Type":"ContainerStarted","Data":"7484427ca0999f23fc70a50b602102bcc651fc095e30dc5e2f708e8748f08dfc"} Oct 14 13:36:57 crc kubenswrapper[4725]: I1014 13:36:57.986212 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pb4gn" event={"ID":"16f2164a-fbb3-4515-8732-723ea2301364","Type":"ContainerStarted","Data":"5835651185d50281fb1fceabb2b7fdaee0160e7c238e68a2ac5d5c55908c2b1e"} Oct 14 13:36:58 crc kubenswrapper[4725]: I1014 13:36:58.017974 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pb4gn" podStartSLOduration=1.569301059 podStartE2EDuration="10.017946702s" podCreationTimestamp="2025-10-14 13:36:48 +0000 UTC" firstStartedPulling="2025-10-14 13:36:49.056736971 +0000 UTC m=+1325.905171780" lastFinishedPulling="2025-10-14 13:36:57.505382604 +0000 UTC m=+1334.353817423" observedRunningTime="2025-10-14 13:36:58.00556273 +0000 UTC m=+1334.853997559" watchObservedRunningTime="2025-10-14 13:36:58.017946702 +0000 UTC m=+1334.866381531" Oct 14 13:37:00 crc kubenswrapper[4725]: I1014 13:37:00.106688 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Oct 14 13:37:00 crc kubenswrapper[4725]: I1014 13:37:00.801506 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/rabbitmq-cell1-server-0" Oct 14 13:37:02 crc kubenswrapper[4725]: I1014 13:37:02.520355 4725 patch_prober.go:28] interesting pod/machine-config-daemon-t9hh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 13:37:02 crc kubenswrapper[4725]: I1014 13:37:02.520408 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 13:37:24 crc kubenswrapper[4725]: I1014 13:37:24.265574 4725 generic.go:334] "Generic (PLEG): container finished" podID="16f2164a-fbb3-4515-8732-723ea2301364" containerID="5835651185d50281fb1fceabb2b7fdaee0160e7c238e68a2ac5d5c55908c2b1e" exitCode=0 Oct 14 13:37:24 crc kubenswrapper[4725]: I1014 13:37:24.266466 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pb4gn" event={"ID":"16f2164a-fbb3-4515-8732-723ea2301364","Type":"ContainerDied","Data":"5835651185d50281fb1fceabb2b7fdaee0160e7c238e68a2ac5d5c55908c2b1e"} Oct 14 13:37:25 crc kubenswrapper[4725]: I1014 13:37:25.869348 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pb4gn" Oct 14 13:37:25 crc kubenswrapper[4725]: I1014 13:37:25.974666 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16f2164a-fbb3-4515-8732-723ea2301364-inventory\") pod \"16f2164a-fbb3-4515-8732-723ea2301364\" (UID: \"16f2164a-fbb3-4515-8732-723ea2301364\") " Oct 14 13:37:25 crc kubenswrapper[4725]: I1014 13:37:25.974790 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/16f2164a-fbb3-4515-8732-723ea2301364-ssh-key\") pod \"16f2164a-fbb3-4515-8732-723ea2301364\" (UID: \"16f2164a-fbb3-4515-8732-723ea2301364\") " Oct 14 13:37:25 crc kubenswrapper[4725]: I1014 13:37:25.974942 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvdqc\" (UniqueName: \"kubernetes.io/projected/16f2164a-fbb3-4515-8732-723ea2301364-kube-api-access-bvdqc\") pod \"16f2164a-fbb3-4515-8732-723ea2301364\" (UID: \"16f2164a-fbb3-4515-8732-723ea2301364\") " Oct 14 13:37:25 crc kubenswrapper[4725]: I1014 13:37:25.975003 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16f2164a-fbb3-4515-8732-723ea2301364-repo-setup-combined-ca-bundle\") pod \"16f2164a-fbb3-4515-8732-723ea2301364\" (UID: \"16f2164a-fbb3-4515-8732-723ea2301364\") " Oct 14 13:37:25 crc kubenswrapper[4725]: I1014 13:37:25.980125 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16f2164a-fbb3-4515-8732-723ea2301364-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "16f2164a-fbb3-4515-8732-723ea2301364" (UID: "16f2164a-fbb3-4515-8732-723ea2301364"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:37:25 crc kubenswrapper[4725]: I1014 13:37:25.981426 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16f2164a-fbb3-4515-8732-723ea2301364-kube-api-access-bvdqc" (OuterVolumeSpecName: "kube-api-access-bvdqc") pod "16f2164a-fbb3-4515-8732-723ea2301364" (UID: "16f2164a-fbb3-4515-8732-723ea2301364"). InnerVolumeSpecName "kube-api-access-bvdqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:37:26 crc kubenswrapper[4725]: I1014 13:37:26.001075 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16f2164a-fbb3-4515-8732-723ea2301364-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "16f2164a-fbb3-4515-8732-723ea2301364" (UID: "16f2164a-fbb3-4515-8732-723ea2301364"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:37:26 crc kubenswrapper[4725]: I1014 13:37:26.011228 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16f2164a-fbb3-4515-8732-723ea2301364-inventory" (OuterVolumeSpecName: "inventory") pod "16f2164a-fbb3-4515-8732-723ea2301364" (UID: "16f2164a-fbb3-4515-8732-723ea2301364"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:37:26 crc kubenswrapper[4725]: I1014 13:37:26.077073 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvdqc\" (UniqueName: \"kubernetes.io/projected/16f2164a-fbb3-4515-8732-723ea2301364-kube-api-access-bvdqc\") on node \"crc\" DevicePath \"\"" Oct 14 13:37:26 crc kubenswrapper[4725]: I1014 13:37:26.077103 4725 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16f2164a-fbb3-4515-8732-723ea2301364-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:37:26 crc kubenswrapper[4725]: I1014 13:37:26.077113 4725 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16f2164a-fbb3-4515-8732-723ea2301364-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 13:37:26 crc kubenswrapper[4725]: I1014 13:37:26.077124 4725 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/16f2164a-fbb3-4515-8732-723ea2301364-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 13:37:26 crc kubenswrapper[4725]: I1014 13:37:26.297284 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pb4gn" event={"ID":"16f2164a-fbb3-4515-8732-723ea2301364","Type":"ContainerDied","Data":"7484427ca0999f23fc70a50b602102bcc651fc095e30dc5e2f708e8748f08dfc"} Oct 14 13:37:26 crc kubenswrapper[4725]: I1014 13:37:26.297360 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7484427ca0999f23fc70a50b602102bcc651fc095e30dc5e2f708e8748f08dfc" Oct 14 13:37:26 crc kubenswrapper[4725]: I1014 13:37:26.297373 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pb4gn" Oct 14 13:37:26 crc kubenswrapper[4725]: I1014 13:37:26.385901 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-z9g9d"] Oct 14 13:37:26 crc kubenswrapper[4725]: E1014 13:37:26.386347 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16f2164a-fbb3-4515-8732-723ea2301364" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 14 13:37:26 crc kubenswrapper[4725]: I1014 13:37:26.386399 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="16f2164a-fbb3-4515-8732-723ea2301364" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 14 13:37:26 crc kubenswrapper[4725]: I1014 13:37:26.386654 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="16f2164a-fbb3-4515-8732-723ea2301364" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 14 13:37:26 crc kubenswrapper[4725]: I1014 13:37:26.387366 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-z9g9d" Oct 14 13:37:26 crc kubenswrapper[4725]: I1014 13:37:26.389954 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 13:37:26 crc kubenswrapper[4725]: I1014 13:37:26.389993 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 13:37:26 crc kubenswrapper[4725]: I1014 13:37:26.390113 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qnvbv" Oct 14 13:37:26 crc kubenswrapper[4725]: I1014 13:37:26.391892 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 13:37:26 crc kubenswrapper[4725]: I1014 13:37:26.402750 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-z9g9d"] Oct 14 13:37:26 crc kubenswrapper[4725]: I1014 13:37:26.484000 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e7dc888-e069-4f97-84b9-02e9f37aec6c-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-z9g9d\" (UID: \"0e7dc888-e069-4f97-84b9-02e9f37aec6c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-z9g9d" Oct 14 13:37:26 crc kubenswrapper[4725]: I1014 13:37:26.484093 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxq5s\" (UniqueName: \"kubernetes.io/projected/0e7dc888-e069-4f97-84b9-02e9f37aec6c-kube-api-access-kxq5s\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-z9g9d\" (UID: \"0e7dc888-e069-4f97-84b9-02e9f37aec6c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-z9g9d" Oct 14 13:37:26 crc kubenswrapper[4725]: I1014 13:37:26.484831 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e7dc888-e069-4f97-84b9-02e9f37aec6c-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-z9g9d\" (UID: \"0e7dc888-e069-4f97-84b9-02e9f37aec6c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-z9g9d" Oct 14 13:37:26 crc kubenswrapper[4725]: I1014 13:37:26.586902 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/0e7dc888-e069-4f97-84b9-02e9f37aec6c-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-z9g9d\" (UID: \"0e7dc888-e069-4f97-84b9-02e9f37aec6c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-z9g9d" Oct 14 13:37:26 crc kubenswrapper[4725]: I1014 13:37:26.586995 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e7dc888-e069-4f97-84b9-02e9f37aec6c-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-z9g9d\" (UID: \"0e7dc888-e069-4f97-84b9-02e9f37aec6c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-z9g9d" Oct 14 13:37:26 crc kubenswrapper[4725]: I1014 13:37:26.587030 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxq5s\" (UniqueName: \"kubernetes.io/projected/0e7dc888-e069-4f97-84b9-02e9f37aec6c-kube-api-access-kxq5s\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-z9g9d\" (UID: \"0e7dc888-e069-4f97-84b9-02e9f37aec6c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-z9g9d" Oct 14 13:37:26 crc kubenswrapper[4725]: I1014 13:37:26.595122 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e7dc888-e069-4f97-84b9-02e9f37aec6c-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-z9g9d\" (UID: \"0e7dc888-e069-4f97-84b9-02e9f37aec6c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-z9g9d" Oct 14 13:37:26 crc kubenswrapper[4725]: I1014 13:37:26.595259 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e7dc888-e069-4f97-84b9-02e9f37aec6c-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-z9g9d\" (UID: \"0e7dc888-e069-4f97-84b9-02e9f37aec6c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-z9g9d" Oct 14 13:37:26 crc kubenswrapper[4725]: I1014 13:37:26.613067 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxq5s\" (UniqueName: \"kubernetes.io/projected/0e7dc888-e069-4f97-84b9-02e9f37aec6c-kube-api-access-kxq5s\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-z9g9d\" (UID: \"0e7dc888-e069-4f97-84b9-02e9f37aec6c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-z9g9d" Oct 14 13:37:26 crc kubenswrapper[4725]: I1014 13:37:26.708227 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-z9g9d" Oct 14 13:37:27 crc kubenswrapper[4725]: I1014 13:37:27.326156 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-z9g9d"] Oct 14 13:37:28 crc kubenswrapper[4725]: I1014 13:37:28.330006 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-z9g9d" event={"ID":"0e7dc888-e069-4f97-84b9-02e9f37aec6c","Type":"ContainerStarted","Data":"4a40df4a17f495bc37b2171b1b223cb29d10764f74cc86f3aa4f4b886f22e414"} Oct 14 13:37:29 crc kubenswrapper[4725]: I1014 13:37:29.345891 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-z9g9d" event={"ID":"0e7dc888-e069-4f97-84b9-02e9f37aec6c","Type":"ContainerStarted","Data":"7225b1cfde70f049b61d29f63d9d1d0d06d3aea9d49d927bf571a3bdd280afc0"} Oct 14 13:37:29 crc kubenswrapper[4725]: I1014 13:37:29.374595 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-z9g9d" podStartSLOduration=2.226913354 podStartE2EDuration="3.374567532s" podCreationTimestamp="2025-10-14 13:37:26 +0000 UTC" firstStartedPulling="2025-10-14 13:37:27.336196523 +0000 UTC m=+1364.184631342" lastFinishedPulling="2025-10-14 13:37:28.483850681 +0000 UTC m=+1365.332285520" observedRunningTime="2025-10-14 13:37:29.365238412 +0000 UTC m=+1366.213673241" watchObservedRunningTime="2025-10-14 13:37:29.374567532 +0000 UTC m=+1366.223002351" Oct 14 13:37:32 crc kubenswrapper[4725]: I1014 13:37:32.372888 4725 generic.go:334] "Generic (PLEG): container finished" podID="0e7dc888-e069-4f97-84b9-02e9f37aec6c" containerID="7225b1cfde70f049b61d29f63d9d1d0d06d3aea9d49d927bf571a3bdd280afc0" exitCode=0 Oct 14 13:37:32 crc kubenswrapper[4725]: I1014 13:37:32.372939 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-z9g9d" event={"ID":"0e7dc888-e069-4f97-84b9-02e9f37aec6c","Type":"ContainerDied","Data":"7225b1cfde70f049b61d29f63d9d1d0d06d3aea9d49d927bf571a3bdd280afc0"} Oct 14 13:37:32 crc kubenswrapper[4725]: I1014 13:37:32.521012 4725 patch_prober.go:28] interesting pod/machine-config-daemon-t9hh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 13:37:32 crc kubenswrapper[4725]: I1014 13:37:32.521095 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 13:37:32 crc kubenswrapper[4725]: I1014 13:37:32.521162 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" Oct 14 13:37:32 crc kubenswrapper[4725]: I1014 13:37:32.522094 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2ed28316bb88cfbc90574ba8c972086f633bc4c19d223765ba69ed10f863412c"} pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" containerMessage="Container machine-config-daemon failed liveness probe, will be 
restarted" Oct 14 13:37:32 crc kubenswrapper[4725]: I1014 13:37:32.522167 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" containerID="cri-o://2ed28316bb88cfbc90574ba8c972086f633bc4c19d223765ba69ed10f863412c" gracePeriod=600 Oct 14 13:37:33 crc kubenswrapper[4725]: I1014 13:37:33.387718 4725 generic.go:334] "Generic (PLEG): container finished" podID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerID="2ed28316bb88cfbc90574ba8c972086f633bc4c19d223765ba69ed10f863412c" exitCode=0 Oct 14 13:37:33 crc kubenswrapper[4725]: I1014 13:37:33.387787 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" event={"ID":"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c","Type":"ContainerDied","Data":"2ed28316bb88cfbc90574ba8c972086f633bc4c19d223765ba69ed10f863412c"} Oct 14 13:37:33 crc kubenswrapper[4725]: I1014 13:37:33.388091 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" event={"ID":"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c","Type":"ContainerStarted","Data":"6f767896ac68228fe09840fae7ddd8f8c1d269f215009c642a1baacf3a7849d4"} Oct 14 13:37:33 crc kubenswrapper[4725]: I1014 13:37:33.388121 4725 scope.go:117] "RemoveContainer" containerID="ea00d1d4fd499f409fe49f6dd2f54e9fa910b8219b121324d8eb5e1f54150712" Oct 14 13:37:33 crc kubenswrapper[4725]: I1014 13:37:33.785388 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-z9g9d" Oct 14 13:37:33 crc kubenswrapper[4725]: I1014 13:37:33.943920 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxq5s\" (UniqueName: \"kubernetes.io/projected/0e7dc888-e069-4f97-84b9-02e9f37aec6c-kube-api-access-kxq5s\") pod \"0e7dc888-e069-4f97-84b9-02e9f37aec6c\" (UID: \"0e7dc888-e069-4f97-84b9-02e9f37aec6c\") " Oct 14 13:37:33 crc kubenswrapper[4725]: I1014 13:37:33.944036 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e7dc888-e069-4f97-84b9-02e9f37aec6c-inventory\") pod \"0e7dc888-e069-4f97-84b9-02e9f37aec6c\" (UID: \"0e7dc888-e069-4f97-84b9-02e9f37aec6c\") " Oct 14 13:37:33 crc kubenswrapper[4725]: I1014 13:37:33.944256 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e7dc888-e069-4f97-84b9-02e9f37aec6c-ssh-key\") pod \"0e7dc888-e069-4f97-84b9-02e9f37aec6c\" (UID: \"0e7dc888-e069-4f97-84b9-02e9f37aec6c\") " Oct 14 13:37:33 crc kubenswrapper[4725]: I1014 13:37:33.952762 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e7dc888-e069-4f97-84b9-02e9f37aec6c-kube-api-access-kxq5s" (OuterVolumeSpecName: "kube-api-access-kxq5s") pod "0e7dc888-e069-4f97-84b9-02e9f37aec6c" (UID: "0e7dc888-e069-4f97-84b9-02e9f37aec6c"). InnerVolumeSpecName "kube-api-access-kxq5s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:37:33 crc kubenswrapper[4725]: I1014 13:37:33.980509 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e7dc888-e069-4f97-84b9-02e9f37aec6c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0e7dc888-e069-4f97-84b9-02e9f37aec6c" (UID: "0e7dc888-e069-4f97-84b9-02e9f37aec6c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:37:34 crc kubenswrapper[4725]: I1014 13:37:34.001621 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e7dc888-e069-4f97-84b9-02e9f37aec6c-inventory" (OuterVolumeSpecName: "inventory") pod "0e7dc888-e069-4f97-84b9-02e9f37aec6c" (UID: "0e7dc888-e069-4f97-84b9-02e9f37aec6c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:37:34 crc kubenswrapper[4725]: I1014 13:37:34.046593 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxq5s\" (UniqueName: \"kubernetes.io/projected/0e7dc888-e069-4f97-84b9-02e9f37aec6c-kube-api-access-kxq5s\") on node \"crc\" DevicePath \"\"" Oct 14 13:37:34 crc kubenswrapper[4725]: I1014 13:37:34.046624 4725 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e7dc888-e069-4f97-84b9-02e9f37aec6c-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 13:37:34 crc kubenswrapper[4725]: I1014 13:37:34.046634 4725 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e7dc888-e069-4f97-84b9-02e9f37aec6c-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 13:37:34 crc kubenswrapper[4725]: I1014 13:37:34.406639 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-z9g9d" event={"ID":"0e7dc888-e069-4f97-84b9-02e9f37aec6c","Type":"ContainerDied","Data":"4a40df4a17f495bc37b2171b1b223cb29d10764f74cc86f3aa4f4b886f22e414"} Oct 14 13:37:34 crc kubenswrapper[4725]: I1014 13:37:34.406970 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a40df4a17f495bc37b2171b1b223cb29d10764f74cc86f3aa4f4b886f22e414" Oct 14 13:37:34 crc kubenswrapper[4725]: I1014 13:37:34.406781 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-z9g9d" Oct 14 13:37:34 crc kubenswrapper[4725]: I1014 13:37:34.487064 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgkf8"] Oct 14 13:37:34 crc kubenswrapper[4725]: E1014 13:37:34.487615 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e7dc888-e069-4f97-84b9-02e9f37aec6c" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Oct 14 13:37:34 crc kubenswrapper[4725]: I1014 13:37:34.487636 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e7dc888-e069-4f97-84b9-02e9f37aec6c" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Oct 14 13:37:34 crc kubenswrapper[4725]: I1014 13:37:34.487891 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e7dc888-e069-4f97-84b9-02e9f37aec6c" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Oct 14 13:37:34 crc kubenswrapper[4725]: I1014 13:37:34.488659 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgkf8" Oct 14 13:37:34 crc kubenswrapper[4725]: I1014 13:37:34.491808 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 13:37:34 crc kubenswrapper[4725]: I1014 13:37:34.492472 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 13:37:34 crc kubenswrapper[4725]: I1014 13:37:34.492764 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qnvbv" Oct 14 13:37:34 crc kubenswrapper[4725]: I1014 13:37:34.493028 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 13:37:34 crc kubenswrapper[4725]: I1014 13:37:34.510086 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgkf8"] Oct 14 13:37:34 crc kubenswrapper[4725]: I1014 13:37:34.557392 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fb7769c-8d15-4925-8f3a-3f8e5a81fd72-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vgkf8\" (UID: \"7fb7769c-8d15-4925-8f3a-3f8e5a81fd72\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgkf8" Oct 14 13:37:34 crc kubenswrapper[4725]: I1014 13:37:34.557733 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lkvd\" (UniqueName: \"kubernetes.io/projected/7fb7769c-8d15-4925-8f3a-3f8e5a81fd72-kube-api-access-5lkvd\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vgkf8\" (UID: \"7fb7769c-8d15-4925-8f3a-3f8e5a81fd72\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgkf8" Oct 14 13:37:34 crc kubenswrapper[4725]: I1014 13:37:34.558121 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7fb7769c-8d15-4925-8f3a-3f8e5a81fd72-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vgkf8\" (UID: \"7fb7769c-8d15-4925-8f3a-3f8e5a81fd72\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgkf8" Oct 14 13:37:34 crc kubenswrapper[4725]: I1014 13:37:34.558388 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7fb7769c-8d15-4925-8f3a-3f8e5a81fd72-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vgkf8\" (UID: \"7fb7769c-8d15-4925-8f3a-3f8e5a81fd72\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgkf8" Oct 14 13:37:34 crc kubenswrapper[4725]: I1014 13:37:34.660325 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lkvd\" (UniqueName: \"kubernetes.io/projected/7fb7769c-8d15-4925-8f3a-3f8e5a81fd72-kube-api-access-5lkvd\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vgkf8\" (UID: \"7fb7769c-8d15-4925-8f3a-3f8e5a81fd72\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgkf8" Oct 14 13:37:34 crc kubenswrapper[4725]: I1014 13:37:34.660398 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7fb7769c-8d15-4925-8f3a-3f8e5a81fd72-ssh-key\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-vgkf8\" (UID: \"7fb7769c-8d15-4925-8f3a-3f8e5a81fd72\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgkf8" Oct 14 13:37:34 crc kubenswrapper[4725]: I1014 13:37:34.660457 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7fb7769c-8d15-4925-8f3a-3f8e5a81fd72-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vgkf8\" (UID: \"7fb7769c-8d15-4925-8f3a-3f8e5a81fd72\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgkf8" Oct 14 13:37:34 crc kubenswrapper[4725]: I1014 13:37:34.660479 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fb7769c-8d15-4925-8f3a-3f8e5a81fd72-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vgkf8\" (UID: \"7fb7769c-8d15-4925-8f3a-3f8e5a81fd72\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgkf8" Oct 14 13:37:34 crc kubenswrapper[4725]: I1014 13:37:34.665976 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fb7769c-8d15-4925-8f3a-3f8e5a81fd72-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vgkf8\" (UID: \"7fb7769c-8d15-4925-8f3a-3f8e5a81fd72\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgkf8" Oct 14 13:37:34 crc kubenswrapper[4725]: I1014 13:37:34.665976 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7fb7769c-8d15-4925-8f3a-3f8e5a81fd72-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vgkf8\" (UID: \"7fb7769c-8d15-4925-8f3a-3f8e5a81fd72\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgkf8" Oct 14 13:37:34 crc kubenswrapper[4725]: I1014 13:37:34.666516 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7fb7769c-8d15-4925-8f3a-3f8e5a81fd72-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vgkf8\" (UID: \"7fb7769c-8d15-4925-8f3a-3f8e5a81fd72\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgkf8" Oct 14 13:37:34 crc kubenswrapper[4725]: I1014 13:37:34.685171 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lkvd\" (UniqueName: \"kubernetes.io/projected/7fb7769c-8d15-4925-8f3a-3f8e5a81fd72-kube-api-access-5lkvd\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vgkf8\" (UID: \"7fb7769c-8d15-4925-8f3a-3f8e5a81fd72\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgkf8" Oct 14 13:37:34 crc kubenswrapper[4725]: I1014 13:37:34.814790 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgkf8" Oct 14 13:37:35 crc kubenswrapper[4725]: I1014 13:37:35.388247 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgkf8"] Oct 14 13:37:35 crc kubenswrapper[4725]: I1014 13:37:35.416256 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgkf8" event={"ID":"7fb7769c-8d15-4925-8f3a-3f8e5a81fd72","Type":"ContainerStarted","Data":"f3fcb7c13e9ed5fd3cde7007d6305ef02f7ccbe4c677c6cd70f6ec7db2db6ce7"} Oct 14 13:37:36 crc kubenswrapper[4725]: I1014 13:37:36.436839 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgkf8" event={"ID":"7fb7769c-8d15-4925-8f3a-3f8e5a81fd72","Type":"ContainerStarted","Data":"c8c999465237885d0a75925c58888ede5aa8ac801591b23d4c2075f674e74a39"} Oct 14 13:38:11 crc kubenswrapper[4725]: I1014 13:38:11.806791 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgkf8" podStartSLOduration=37.266064418 podStartE2EDuration="37.806764701s" podCreationTimestamp="2025-10-14 13:37:34 +0000 UTC" firstStartedPulling="2025-10-14 13:37:35.394438992 +0000 UTC m=+1372.242873811" lastFinishedPulling="2025-10-14 13:37:35.935139255 +0000 UTC m=+1372.783574094" observedRunningTime="2025-10-14 13:37:36.465358655 +0000 UTC m=+1373.313793494" watchObservedRunningTime="2025-10-14 13:38:11.806764701 +0000 UTC m=+1408.655199520" Oct 14 13:38:11 crc kubenswrapper[4725]: I1014 13:38:11.810736 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m4qsf"] Oct 14 13:38:11 crc kubenswrapper[4725]: I1014 13:38:11.813016 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m4qsf"
Oct 14 13:38:11 crc kubenswrapper[4725]: I1014 13:38:11.826211 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m4qsf"]
Oct 14 13:38:11 crc kubenswrapper[4725]: I1014 13:38:11.968803 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7k4x\" (UniqueName: \"kubernetes.io/projected/da8f45ef-2dbd-42b4-ab32-41fdbc941304-kube-api-access-r7k4x\") pod \"redhat-marketplace-m4qsf\" (UID: \"da8f45ef-2dbd-42b4-ab32-41fdbc941304\") " pod="openshift-marketplace/redhat-marketplace-m4qsf"
Oct 14 13:38:11 crc kubenswrapper[4725]: I1014 13:38:11.968911 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da8f45ef-2dbd-42b4-ab32-41fdbc941304-catalog-content\") pod \"redhat-marketplace-m4qsf\" (UID: \"da8f45ef-2dbd-42b4-ab32-41fdbc941304\") " pod="openshift-marketplace/redhat-marketplace-m4qsf"
Oct 14 13:38:11 crc kubenswrapper[4725]: I1014 13:38:11.969029 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da8f45ef-2dbd-42b4-ab32-41fdbc941304-utilities\") pod \"redhat-marketplace-m4qsf\" (UID: \"da8f45ef-2dbd-42b4-ab32-41fdbc941304\") " pod="openshift-marketplace/redhat-marketplace-m4qsf"
Oct 14 13:38:12 crc kubenswrapper[4725]: I1014 13:38:12.070566 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da8f45ef-2dbd-42b4-ab32-41fdbc941304-catalog-content\") pod \"redhat-marketplace-m4qsf\" (UID: \"da8f45ef-2dbd-42b4-ab32-41fdbc941304\") " pod="openshift-marketplace/redhat-marketplace-m4qsf"
Oct 14 13:38:12 crc kubenswrapper[4725]: I1014 13:38:12.071041 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da8f45ef-2dbd-42b4-ab32-41fdbc941304-catalog-content\") pod \"redhat-marketplace-m4qsf\" (UID: \"da8f45ef-2dbd-42b4-ab32-41fdbc941304\") " pod="openshift-marketplace/redhat-marketplace-m4qsf"
Oct 14 13:38:12 crc kubenswrapper[4725]: I1014 13:38:12.071117 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da8f45ef-2dbd-42b4-ab32-41fdbc941304-utilities\") pod \"redhat-marketplace-m4qsf\" (UID: \"da8f45ef-2dbd-42b4-ab32-41fdbc941304\") " pod="openshift-marketplace/redhat-marketplace-m4qsf"
Oct 14 13:38:12 crc kubenswrapper[4725]: I1014 13:38:12.071311 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7k4x\" (UniqueName: \"kubernetes.io/projected/da8f45ef-2dbd-42b4-ab32-41fdbc941304-kube-api-access-r7k4x\") pod \"redhat-marketplace-m4qsf\" (UID: \"da8f45ef-2dbd-42b4-ab32-41fdbc941304\") " pod="openshift-marketplace/redhat-marketplace-m4qsf"
Oct 14 13:38:12 crc kubenswrapper[4725]: I1014 13:38:12.071696 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da8f45ef-2dbd-42b4-ab32-41fdbc941304-utilities\") pod \"redhat-marketplace-m4qsf\" (UID: \"da8f45ef-2dbd-42b4-ab32-41fdbc941304\") " pod="openshift-marketplace/redhat-marketplace-m4qsf"
Oct 14 13:38:12 crc kubenswrapper[4725]: I1014 13:38:12.094494 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7k4x\" (UniqueName: \"kubernetes.io/projected/da8f45ef-2dbd-42b4-ab32-41fdbc941304-kube-api-access-r7k4x\") pod \"redhat-marketplace-m4qsf\" (UID: \"da8f45ef-2dbd-42b4-ab32-41fdbc941304\") " pod="openshift-marketplace/redhat-marketplace-m4qsf"
Oct 14 13:38:12 crc kubenswrapper[4725]: I1014 13:38:12.152778 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m4qsf"
Oct 14 13:38:12 crc kubenswrapper[4725]: I1014 13:38:12.629089 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m4qsf"]
Oct 14 13:38:12 crc kubenswrapper[4725]: I1014 13:38:12.855701 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4qsf" event={"ID":"da8f45ef-2dbd-42b4-ab32-41fdbc941304","Type":"ContainerStarted","Data":"e328ef03b0eae5f44a278c4affa066dadb611e4b9f10d6e01a82154e1bba0938"}
Oct 14 13:38:13 crc kubenswrapper[4725]: I1014 13:38:13.866675 4725 generic.go:334] "Generic (PLEG): container finished" podID="da8f45ef-2dbd-42b4-ab32-41fdbc941304" containerID="810915fd51ebb261d3d2cda0b422d683bc9a112ec288b03dab67936a6b8240e5" exitCode=0
Oct 14 13:38:13 crc kubenswrapper[4725]: I1014 13:38:13.866723 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4qsf" event={"ID":"da8f45ef-2dbd-42b4-ab32-41fdbc941304","Type":"ContainerDied","Data":"810915fd51ebb261d3d2cda0b422d683bc9a112ec288b03dab67936a6b8240e5"}
Oct 14 13:38:15 crc kubenswrapper[4725]: I1014 13:38:15.893576 4725 generic.go:334] "Generic (PLEG): container finished" podID="da8f45ef-2dbd-42b4-ab32-41fdbc941304" containerID="eccf70830a4dfcab7f8714d7cd4a36c35d5d9990d4982972c110b1f97e93a899" exitCode=0
Oct 14 13:38:15 crc kubenswrapper[4725]: I1014 13:38:15.894323 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4qsf" event={"ID":"da8f45ef-2dbd-42b4-ab32-41fdbc941304","Type":"ContainerDied","Data":"eccf70830a4dfcab7f8714d7cd4a36c35d5d9990d4982972c110b1f97e93a899"}
Oct 14 13:38:16 crc kubenswrapper[4725]: I1014 13:38:16.402229 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qvtws"]
Oct 14 13:38:16 crc kubenswrapper[4725]: I1014 13:38:16.408924 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qvtws"
Oct 14 13:38:16 crc kubenswrapper[4725]: I1014 13:38:16.431065 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qvtws"]
Oct 14 13:38:16 crc kubenswrapper[4725]: I1014 13:38:16.557676 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkvjs\" (UniqueName: \"kubernetes.io/projected/3df4eb28-16e5-4156-a438-6333b006728b-kube-api-access-vkvjs\") pod \"redhat-operators-qvtws\" (UID: \"3df4eb28-16e5-4156-a438-6333b006728b\") " pod="openshift-marketplace/redhat-operators-qvtws"
Oct 14 13:38:16 crc kubenswrapper[4725]: I1014 13:38:16.558078 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3df4eb28-16e5-4156-a438-6333b006728b-utilities\") pod \"redhat-operators-qvtws\" (UID: \"3df4eb28-16e5-4156-a438-6333b006728b\") " pod="openshift-marketplace/redhat-operators-qvtws"
Oct 14 13:38:16 crc kubenswrapper[4725]: I1014 13:38:16.558136 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3df4eb28-16e5-4156-a438-6333b006728b-catalog-content\") pod \"redhat-operators-qvtws\" (UID: \"3df4eb28-16e5-4156-a438-6333b006728b\") " pod="openshift-marketplace/redhat-operators-qvtws"
Oct 14 13:38:16 crc kubenswrapper[4725]: I1014 13:38:16.659739 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkvjs\" (UniqueName: \"kubernetes.io/projected/3df4eb28-16e5-4156-a438-6333b006728b-kube-api-access-vkvjs\") pod \"redhat-operators-qvtws\" (UID: \"3df4eb28-16e5-4156-a438-6333b006728b\") " pod="openshift-marketplace/redhat-operators-qvtws"
Oct 14 13:38:16 crc kubenswrapper[4725]: I1014 13:38:16.659873 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3df4eb28-16e5-4156-a438-6333b006728b-utilities\") pod \"redhat-operators-qvtws\" (UID: \"3df4eb28-16e5-4156-a438-6333b006728b\") " pod="openshift-marketplace/redhat-operators-qvtws"
Oct 14 13:38:16 crc kubenswrapper[4725]: I1014 13:38:16.659898 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3df4eb28-16e5-4156-a438-6333b006728b-catalog-content\") pod \"redhat-operators-qvtws\" (UID: \"3df4eb28-16e5-4156-a438-6333b006728b\") " pod="openshift-marketplace/redhat-operators-qvtws"
Oct 14 13:38:16 crc kubenswrapper[4725]: I1014 13:38:16.660579 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3df4eb28-16e5-4156-a438-6333b006728b-utilities\") pod \"redhat-operators-qvtws\" (UID: \"3df4eb28-16e5-4156-a438-6333b006728b\") " pod="openshift-marketplace/redhat-operators-qvtws"
Oct 14 13:38:16 crc kubenswrapper[4725]: I1014 13:38:16.660587 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3df4eb28-16e5-4156-a438-6333b006728b-catalog-content\") pod \"redhat-operators-qvtws\" (UID: \"3df4eb28-16e5-4156-a438-6333b006728b\") " pod="openshift-marketplace/redhat-operators-qvtws"
Oct 14 13:38:16 crc kubenswrapper[4725]: I1014 13:38:16.680790 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkvjs\" (UniqueName: \"kubernetes.io/projected/3df4eb28-16e5-4156-a438-6333b006728b-kube-api-access-vkvjs\") pod \"redhat-operators-qvtws\" (UID: \"3df4eb28-16e5-4156-a438-6333b006728b\") " pod="openshift-marketplace/redhat-operators-qvtws"
Oct 14 13:38:16 crc kubenswrapper[4725]: I1014 13:38:16.737014 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qvtws"
Oct 14 13:38:17 crc kubenswrapper[4725]: I1014 13:38:17.232734 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qvtws"]
Oct 14 13:38:17 crc kubenswrapper[4725]: W1014 13:38:17.239629 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3df4eb28_16e5_4156_a438_6333b006728b.slice/crio-4c8054091a3038053369f05f2eca96f47352df7f0a3d687e3cec78c6287f8ad8 WatchSource:0}: Error finding container 4c8054091a3038053369f05f2eca96f47352df7f0a3d687e3cec78c6287f8ad8: Status 404 returned error can't find the container with id 4c8054091a3038053369f05f2eca96f47352df7f0a3d687e3cec78c6287f8ad8
Oct 14 13:38:17 crc kubenswrapper[4725]: I1014 13:38:17.917436 4725 generic.go:334] "Generic (PLEG): container finished" podID="3df4eb28-16e5-4156-a438-6333b006728b" containerID="0a817572fff7c8c702cced5a21897841f16a195e27c873c8b93095bd5a983c0b" exitCode=0
Oct 14 13:38:17 crc kubenswrapper[4725]: I1014 13:38:17.917584 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvtws" event={"ID":"3df4eb28-16e5-4156-a438-6333b006728b","Type":"ContainerDied","Data":"0a817572fff7c8c702cced5a21897841f16a195e27c873c8b93095bd5a983c0b"}
Oct 14 13:38:17 crc kubenswrapper[4725]: I1014 13:38:17.918255 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvtws" event={"ID":"3df4eb28-16e5-4156-a438-6333b006728b","Type":"ContainerStarted","Data":"4c8054091a3038053369f05f2eca96f47352df7f0a3d687e3cec78c6287f8ad8"}
Oct 14 13:38:17 crc kubenswrapper[4725]: I1014 13:38:17.933810 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4qsf" event={"ID":"da8f45ef-2dbd-42b4-ab32-41fdbc941304","Type":"ContainerStarted","Data":"ede327d4e10a7274f73c86ff67494368e9be8e5770c7d8fa2bf5b505c9b81b88"}
Oct 14 13:38:17 crc kubenswrapper[4725]: I1014 13:38:17.964869 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m4qsf" podStartSLOduration=4.050240697 podStartE2EDuration="6.964847874s" podCreationTimestamp="2025-10-14 13:38:11 +0000 UTC" firstStartedPulling="2025-10-14 13:38:13.868787208 +0000 UTC m=+1410.717222027" lastFinishedPulling="2025-10-14 13:38:16.783394395 +0000 UTC m=+1413.631829204" observedRunningTime="2025-10-14 13:38:17.957286684 +0000 UTC m=+1414.805721493" watchObservedRunningTime="2025-10-14 13:38:17.964847874 +0000 UTC m=+1414.813282683"
Oct 14 13:38:19 crc kubenswrapper[4725]: I1014 13:38:19.945442 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvtws" event={"ID":"3df4eb28-16e5-4156-a438-6333b006728b","Type":"ContainerStarted","Data":"654947e77e31fec0db42c6a2a99909f01bf3aff7266c5c1d6be5d4bcae0fbed3"}
Oct 14 13:38:20 crc kubenswrapper[4725]: I1014 13:38:20.958546 4725 generic.go:334] "Generic (PLEG): container finished" podID="3df4eb28-16e5-4156-a438-6333b006728b" containerID="654947e77e31fec0db42c6a2a99909f01bf3aff7266c5c1d6be5d4bcae0fbed3" exitCode=0
Oct 14 13:38:20 crc kubenswrapper[4725]: I1014 13:38:20.958671 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvtws" event={"ID":"3df4eb28-16e5-4156-a438-6333b006728b","Type":"ContainerDied","Data":"654947e77e31fec0db42c6a2a99909f01bf3aff7266c5c1d6be5d4bcae0fbed3"}
Oct 14 13:38:22 crc kubenswrapper[4725]: I1014 13:38:22.153024 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m4qsf"
Oct 14 13:38:22 crc kubenswrapper[4725]: I1014 13:38:22.153371 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m4qsf"
Oct 14 13:38:22 crc kubenswrapper[4725]: I1014 13:38:22.201551 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m4qsf"
Oct 14 13:38:22 crc kubenswrapper[4725]: I1014 13:38:22.984151 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvtws" event={"ID":"3df4eb28-16e5-4156-a438-6333b006728b","Type":"ContainerStarted","Data":"304451b0efeb2dbb0afe63cf94afe4aec559e0c24498449eb779192ef9d082bb"}
Oct 14 13:38:23 crc kubenswrapper[4725]: I1014 13:38:23.016212 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qvtws" podStartSLOduration=3.09014198 podStartE2EDuration="7.016190047s" podCreationTimestamp="2025-10-14 13:38:16 +0000 UTC" firstStartedPulling="2025-10-14 13:38:17.919310667 +0000 UTC m=+1414.767745476" lastFinishedPulling="2025-10-14 13:38:21.845358724 +0000 UTC m=+1418.693793543" observedRunningTime="2025-10-14 13:38:23.007422623 +0000 UTC m=+1419.855857472" watchObservedRunningTime="2025-10-14 13:38:23.016190047 +0000 UTC m=+1419.864624866"
Oct 14 13:38:23 crc kubenswrapper[4725]: I1014 13:38:23.058816 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m4qsf"
Oct 14 13:38:23 crc kubenswrapper[4725]: I1014 13:38:23.589982 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m4qsf"]
Oct 14 13:38:25 crc kubenswrapper[4725]: I1014 13:38:25.008287 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m4qsf" podUID="da8f45ef-2dbd-42b4-ab32-41fdbc941304" containerName="registry-server" containerID="cri-o://ede327d4e10a7274f73c86ff67494368e9be8e5770c7d8fa2bf5b505c9b81b88" gracePeriod=2
Oct 14 13:38:25 crc kubenswrapper[4725]: I1014 13:38:25.451010 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m4qsf"
Oct 14 13:38:25 crc kubenswrapper[4725]: I1014 13:38:25.527817 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7k4x\" (UniqueName: \"kubernetes.io/projected/da8f45ef-2dbd-42b4-ab32-41fdbc941304-kube-api-access-r7k4x\") pod \"da8f45ef-2dbd-42b4-ab32-41fdbc941304\" (UID: \"da8f45ef-2dbd-42b4-ab32-41fdbc941304\") "
Oct 14 13:38:25 crc kubenswrapper[4725]: I1014 13:38:25.527986 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da8f45ef-2dbd-42b4-ab32-41fdbc941304-utilities\") pod \"da8f45ef-2dbd-42b4-ab32-41fdbc941304\" (UID: \"da8f45ef-2dbd-42b4-ab32-41fdbc941304\") "
Oct 14 13:38:25 crc kubenswrapper[4725]: I1014 13:38:25.528045 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da8f45ef-2dbd-42b4-ab32-41fdbc941304-catalog-content\") pod \"da8f45ef-2dbd-42b4-ab32-41fdbc941304\" (UID: \"da8f45ef-2dbd-42b4-ab32-41fdbc941304\") "
Oct 14 13:38:25 crc kubenswrapper[4725]: I1014 13:38:25.528967 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da8f45ef-2dbd-42b4-ab32-41fdbc941304-utilities" (OuterVolumeSpecName: "utilities") pod "da8f45ef-2dbd-42b4-ab32-41fdbc941304" (UID: "da8f45ef-2dbd-42b4-ab32-41fdbc941304"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 13:38:25 crc kubenswrapper[4725]: I1014 13:38:25.536907 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da8f45ef-2dbd-42b4-ab32-41fdbc941304-kube-api-access-r7k4x" (OuterVolumeSpecName: "kube-api-access-r7k4x") pod "da8f45ef-2dbd-42b4-ab32-41fdbc941304" (UID: "da8f45ef-2dbd-42b4-ab32-41fdbc941304"). InnerVolumeSpecName "kube-api-access-r7k4x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:38:25 crc kubenswrapper[4725]: I1014 13:38:25.544394 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da8f45ef-2dbd-42b4-ab32-41fdbc941304-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da8f45ef-2dbd-42b4-ab32-41fdbc941304" (UID: "da8f45ef-2dbd-42b4-ab32-41fdbc941304"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 13:38:25 crc kubenswrapper[4725]: I1014 13:38:25.630666 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da8f45ef-2dbd-42b4-ab32-41fdbc941304-utilities\") on node \"crc\" DevicePath \"\""
Oct 14 13:38:25 crc kubenswrapper[4725]: I1014 13:38:25.630701 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da8f45ef-2dbd-42b4-ab32-41fdbc941304-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 14 13:38:25 crc kubenswrapper[4725]: I1014 13:38:25.630718 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7k4x\" (UniqueName: \"kubernetes.io/projected/da8f45ef-2dbd-42b4-ab32-41fdbc941304-kube-api-access-r7k4x\") on node \"crc\" DevicePath \"\""
Oct 14 13:38:26 crc kubenswrapper[4725]: I1014 13:38:26.031608 4725 generic.go:334] "Generic (PLEG): container finished" podID="da8f45ef-2dbd-42b4-ab32-41fdbc941304" containerID="ede327d4e10a7274f73c86ff67494368e9be8e5770c7d8fa2bf5b505c9b81b88" exitCode=0
Oct 14 13:38:26 crc kubenswrapper[4725]: I1014 13:38:26.031663 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4qsf" event={"ID":"da8f45ef-2dbd-42b4-ab32-41fdbc941304","Type":"ContainerDied","Data":"ede327d4e10a7274f73c86ff67494368e9be8e5770c7d8fa2bf5b505c9b81b88"}
Oct 14 13:38:26 crc kubenswrapper[4725]: I1014 13:38:26.031700 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4qsf" event={"ID":"da8f45ef-2dbd-42b4-ab32-41fdbc941304","Type":"ContainerDied","Data":"e328ef03b0eae5f44a278c4affa066dadb611e4b9f10d6e01a82154e1bba0938"}
Oct 14 13:38:26 crc kubenswrapper[4725]: I1014 13:38:26.031726 4725 scope.go:117] "RemoveContainer" containerID="ede327d4e10a7274f73c86ff67494368e9be8e5770c7d8fa2bf5b505c9b81b88"
Oct 14 13:38:26 crc kubenswrapper[4725]: I1014 13:38:26.031763 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m4qsf"
Oct 14 13:38:26 crc kubenswrapper[4725]: I1014 13:38:26.064952 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m4qsf"]
Oct 14 13:38:26 crc kubenswrapper[4725]: I1014 13:38:26.073039 4725 scope.go:117] "RemoveContainer" containerID="eccf70830a4dfcab7f8714d7cd4a36c35d5d9990d4982972c110b1f97e93a899"
Oct 14 13:38:26 crc kubenswrapper[4725]: I1014 13:38:26.075173 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m4qsf"]
Oct 14 13:38:26 crc kubenswrapper[4725]: I1014 13:38:26.097232 4725 scope.go:117] "RemoveContainer" containerID="810915fd51ebb261d3d2cda0b422d683bc9a112ec288b03dab67936a6b8240e5"
Oct 14 13:38:26 crc kubenswrapper[4725]: I1014 13:38:26.146283 4725 scope.go:117] "RemoveContainer" containerID="ede327d4e10a7274f73c86ff67494368e9be8e5770c7d8fa2bf5b505c9b81b88"
Oct 14 13:38:26 crc kubenswrapper[4725]: E1014 13:38:26.146875 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ede327d4e10a7274f73c86ff67494368e9be8e5770c7d8fa2bf5b505c9b81b88\": container with ID starting with ede327d4e10a7274f73c86ff67494368e9be8e5770c7d8fa2bf5b505c9b81b88 not found: ID does not exist" containerID="ede327d4e10a7274f73c86ff67494368e9be8e5770c7d8fa2bf5b505c9b81b88"
Oct 14 13:38:26 crc kubenswrapper[4725]: I1014 13:38:26.146921 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ede327d4e10a7274f73c86ff67494368e9be8e5770c7d8fa2bf5b505c9b81b88"} err="failed to get container status \"ede327d4e10a7274f73c86ff67494368e9be8e5770c7d8fa2bf5b505c9b81b88\": rpc error: code = NotFound desc = could not find container \"ede327d4e10a7274f73c86ff67494368e9be8e5770c7d8fa2bf5b505c9b81b88\": container with ID starting with ede327d4e10a7274f73c86ff67494368e9be8e5770c7d8fa2bf5b505c9b81b88 not found: ID does not exist"
Oct 14 13:38:26 crc kubenswrapper[4725]: I1014 13:38:26.146948 4725 scope.go:117] "RemoveContainer" containerID="eccf70830a4dfcab7f8714d7cd4a36c35d5d9990d4982972c110b1f97e93a899"
Oct 14 13:38:26 crc kubenswrapper[4725]: E1014 13:38:26.147425 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eccf70830a4dfcab7f8714d7cd4a36c35d5d9990d4982972c110b1f97e93a899\": container with ID starting with eccf70830a4dfcab7f8714d7cd4a36c35d5d9990d4982972c110b1f97e93a899 not found: ID does not exist" containerID="eccf70830a4dfcab7f8714d7cd4a36c35d5d9990d4982972c110b1f97e93a899"
Oct 14 13:38:26 crc kubenswrapper[4725]: I1014 13:38:26.147487 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eccf70830a4dfcab7f8714d7cd4a36c35d5d9990d4982972c110b1f97e93a899"} err="failed to get container status \"eccf70830a4dfcab7f8714d7cd4a36c35d5d9990d4982972c110b1f97e93a899\": rpc error: code = NotFound desc = could not find container \"eccf70830a4dfcab7f8714d7cd4a36c35d5d9990d4982972c110b1f97e93a899\": container with ID starting with eccf70830a4dfcab7f8714d7cd4a36c35d5d9990d4982972c110b1f97e93a899 not found: ID does not exist"
Oct 14 13:38:26 crc kubenswrapper[4725]: I1014 13:38:26.147514 4725 scope.go:117] "RemoveContainer" containerID="810915fd51ebb261d3d2cda0b422d683bc9a112ec288b03dab67936a6b8240e5"
Oct 14 13:38:26 crc kubenswrapper[4725]: E1014 13:38:26.147789 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"810915fd51ebb261d3d2cda0b422d683bc9a112ec288b03dab67936a6b8240e5\": container with ID starting with 810915fd51ebb261d3d2cda0b422d683bc9a112ec288b03dab67936a6b8240e5 not found: ID does not exist" containerID="810915fd51ebb261d3d2cda0b422d683bc9a112ec288b03dab67936a6b8240e5"
Oct 14 13:38:26 crc kubenswrapper[4725]: I1014 13:38:26.147816 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"810915fd51ebb261d3d2cda0b422d683bc9a112ec288b03dab67936a6b8240e5"} err="failed to get container status \"810915fd51ebb261d3d2cda0b422d683bc9a112ec288b03dab67936a6b8240e5\": rpc error: code = NotFound desc = could not find container \"810915fd51ebb261d3d2cda0b422d683bc9a112ec288b03dab67936a6b8240e5\": container with ID starting with 810915fd51ebb261d3d2cda0b422d683bc9a112ec288b03dab67936a6b8240e5 not found: ID does not exist"
Oct 14 13:38:26 crc kubenswrapper[4725]: I1014 13:38:26.738223 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qvtws"
Oct 14 13:38:26 crc kubenswrapper[4725]: I1014 13:38:26.738352 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qvtws"
Oct 14 13:38:27 crc kubenswrapper[4725]: I1014 13:38:27.797854 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qvtws" podUID="3df4eb28-16e5-4156-a438-6333b006728b" containerName="registry-server" probeResult="failure" output=<
Oct 14 13:38:27 crc kubenswrapper[4725]: timeout: failed to connect service ":50051" within 1s
Oct 14 13:38:27 crc kubenswrapper[4725]: >
Oct 14 13:38:27 crc kubenswrapper[4725]: I1014 13:38:27.936636 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da8f45ef-2dbd-42b4-ab32-41fdbc941304" path="/var/lib/kubelet/pods/da8f45ef-2dbd-42b4-ab32-41fdbc941304/volumes"
Oct 14 13:38:36 crc kubenswrapper[4725]: I1014 13:38:36.811166 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qvtws"
Oct 14 13:38:36 crc kubenswrapper[4725]: I1014 13:38:36.895582 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qvtws"
Oct 14 13:38:37 crc kubenswrapper[4725]: I1014 13:38:37.069860 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qvtws"]
Oct 14 13:38:38 crc kubenswrapper[4725]: I1014 13:38:38.164582 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qvtws" podUID="3df4eb28-16e5-4156-a438-6333b006728b" containerName="registry-server" containerID="cri-o://304451b0efeb2dbb0afe63cf94afe4aec559e0c24498449eb779192ef9d082bb" gracePeriod=2
Oct 14 13:38:38 crc kubenswrapper[4725]: I1014 13:38:38.677876 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qvtws"
Oct 14 13:38:38 crc kubenswrapper[4725]: I1014 13:38:38.810400 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3df4eb28-16e5-4156-a438-6333b006728b-utilities\") pod \"3df4eb28-16e5-4156-a438-6333b006728b\" (UID: \"3df4eb28-16e5-4156-a438-6333b006728b\") "
Oct 14 13:38:38 crc kubenswrapper[4725]: I1014 13:38:38.810530 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3df4eb28-16e5-4156-a438-6333b006728b-catalog-content\") pod \"3df4eb28-16e5-4156-a438-6333b006728b\" (UID: \"3df4eb28-16e5-4156-a438-6333b006728b\") "
Oct 14 13:38:38 crc kubenswrapper[4725]: I1014 13:38:38.810806 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkvjs\" (UniqueName: \"kubernetes.io/projected/3df4eb28-16e5-4156-a438-6333b006728b-kube-api-access-vkvjs\") pod \"3df4eb28-16e5-4156-a438-6333b006728b\" (UID: \"3df4eb28-16e5-4156-a438-6333b006728b\") "
Oct 14 13:38:38 crc kubenswrapper[4725]: I1014 13:38:38.812302 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3df4eb28-16e5-4156-a438-6333b006728b-utilities" (OuterVolumeSpecName: "utilities") pod "3df4eb28-16e5-4156-a438-6333b006728b" (UID: "3df4eb28-16e5-4156-a438-6333b006728b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 13:38:38 crc kubenswrapper[4725]: I1014 13:38:38.821835 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3df4eb28-16e5-4156-a438-6333b006728b-kube-api-access-vkvjs" (OuterVolumeSpecName: "kube-api-access-vkvjs") pod "3df4eb28-16e5-4156-a438-6333b006728b" (UID: "3df4eb28-16e5-4156-a438-6333b006728b"). InnerVolumeSpecName "kube-api-access-vkvjs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:38:38 crc kubenswrapper[4725]: I1014 13:38:38.913801 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3df4eb28-16e5-4156-a438-6333b006728b-utilities\") on node \"crc\" DevicePath \"\""
Oct 14 13:38:38 crc kubenswrapper[4725]: I1014 13:38:38.913861 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkvjs\" (UniqueName: \"kubernetes.io/projected/3df4eb28-16e5-4156-a438-6333b006728b-kube-api-access-vkvjs\") on node \"crc\" DevicePath \"\""
Oct 14 13:38:38 crc kubenswrapper[4725]: I1014 13:38:38.949261 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3df4eb28-16e5-4156-a438-6333b006728b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3df4eb28-16e5-4156-a438-6333b006728b" (UID: "3df4eb28-16e5-4156-a438-6333b006728b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 13:38:39 crc kubenswrapper[4725]: I1014 13:38:39.016301 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3df4eb28-16e5-4156-a438-6333b006728b-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 14 13:38:39 crc kubenswrapper[4725]: I1014 13:38:39.179714 4725 generic.go:334] "Generic (PLEG): container finished" podID="3df4eb28-16e5-4156-a438-6333b006728b" containerID="304451b0efeb2dbb0afe63cf94afe4aec559e0c24498449eb779192ef9d082bb" exitCode=0
Oct 14 13:38:39 crc kubenswrapper[4725]: I1014 13:38:39.179756 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvtws" event={"ID":"3df4eb28-16e5-4156-a438-6333b006728b","Type":"ContainerDied","Data":"304451b0efeb2dbb0afe63cf94afe4aec559e0c24498449eb779192ef9d082bb"}
Oct 14 13:38:39 crc kubenswrapper[4725]: I1014 13:38:39.179790 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qvtws" event={"ID":"3df4eb28-16e5-4156-a438-6333b006728b","Type":"ContainerDied","Data":"4c8054091a3038053369f05f2eca96f47352df7f0a3d687e3cec78c6287f8ad8"}
Oct 14 13:38:39 crc kubenswrapper[4725]: I1014 13:38:39.179811 4725 scope.go:117] "RemoveContainer" containerID="304451b0efeb2dbb0afe63cf94afe4aec559e0c24498449eb779192ef9d082bb"
Oct 14 13:38:39 crc kubenswrapper[4725]: I1014 13:38:39.179858 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qvtws"
Oct 14 13:38:39 crc kubenswrapper[4725]: I1014 13:38:39.220632 4725 scope.go:117] "RemoveContainer" containerID="654947e77e31fec0db42c6a2a99909f01bf3aff7266c5c1d6be5d4bcae0fbed3"
Oct 14 13:38:39 crc kubenswrapper[4725]: I1014 13:38:39.221441 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qvtws"]
Oct 14 13:38:39 crc kubenswrapper[4725]: I1014 13:38:39.248741 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qvtws"]
Oct 14 13:38:39 crc kubenswrapper[4725]: I1014 13:38:39.264415 4725 scope.go:117] "RemoveContainer" containerID="0a817572fff7c8c702cced5a21897841f16a195e27c873c8b93095bd5a983c0b"
Oct 14 13:38:39 crc kubenswrapper[4725]: I1014 13:38:39.304256 4725 scope.go:117] "RemoveContainer" containerID="304451b0efeb2dbb0afe63cf94afe4aec559e0c24498449eb779192ef9d082bb"
Oct 14 13:38:39 crc kubenswrapper[4725]: E1014 13:38:39.305094 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"304451b0efeb2dbb0afe63cf94afe4aec559e0c24498449eb779192ef9d082bb\": container with ID starting with 304451b0efeb2dbb0afe63cf94afe4aec559e0c24498449eb779192ef9d082bb not found: ID does not exist" containerID="304451b0efeb2dbb0afe63cf94afe4aec559e0c24498449eb779192ef9d082bb"
Oct 14 13:38:39 crc kubenswrapper[4725]: I1014 13:38:39.305272 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"304451b0efeb2dbb0afe63cf94afe4aec559e0c24498449eb779192ef9d082bb"} err="failed to get container status \"304451b0efeb2dbb0afe63cf94afe4aec559e0c24498449eb779192ef9d082bb\": rpc error: code = NotFound desc = could not find container \"304451b0efeb2dbb0afe63cf94afe4aec559e0c24498449eb779192ef9d082bb\": container with ID starting with 304451b0efeb2dbb0afe63cf94afe4aec559e0c24498449eb779192ef9d082bb not found: ID does not exist"
Oct 14 13:38:39 crc kubenswrapper[4725]: I1014 13:38:39.305368 4725 scope.go:117] "RemoveContainer" containerID="654947e77e31fec0db42c6a2a99909f01bf3aff7266c5c1d6be5d4bcae0fbed3"
Oct 14 13:38:39 crc kubenswrapper[4725]: E1014 13:38:39.306522 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"654947e77e31fec0db42c6a2a99909f01bf3aff7266c5c1d6be5d4bcae0fbed3\": container with ID starting with 654947e77e31fec0db42c6a2a99909f01bf3aff7266c5c1d6be5d4bcae0fbed3 not found: ID does not exist" containerID="654947e77e31fec0db42c6a2a99909f01bf3aff7266c5c1d6be5d4bcae0fbed3"
Oct 14 13:38:39 crc kubenswrapper[4725]: I1014 13:38:39.306719 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"654947e77e31fec0db42c6a2a99909f01bf3aff7266c5c1d6be5d4bcae0fbed3"} err="failed to get container status \"654947e77e31fec0db42c6a2a99909f01bf3aff7266c5c1d6be5d4bcae0fbed3\": rpc error: code = NotFound desc = could not find container \"654947e77e31fec0db42c6a2a99909f01bf3aff7266c5c1d6be5d4bcae0fbed3\": container with ID starting with 654947e77e31fec0db42c6a2a99909f01bf3aff7266c5c1d6be5d4bcae0fbed3 not found: ID does not exist"
Oct 14 13:38:39 crc kubenswrapper[4725]: I1014 13:38:39.306858 4725 scope.go:117] "RemoveContainer" containerID="0a817572fff7c8c702cced5a21897841f16a195e27c873c8b93095bd5a983c0b"
Oct 14 13:38:39 crc kubenswrapper[4725]: E1014 13:38:39.307533 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a817572fff7c8c702cced5a21897841f16a195e27c873c8b93095bd5a983c0b\": container with ID starting with 0a817572fff7c8c702cced5a21897841f16a195e27c873c8b93095bd5a983c0b not found: ID does not exist" containerID="0a817572fff7c8c702cced5a21897841f16a195e27c873c8b93095bd5a983c0b"
Oct 14 13:38:39 crc kubenswrapper[4725]: I1014 13:38:39.307633 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a817572fff7c8c702cced5a21897841f16a195e27c873c8b93095bd5a983c0b"} err="failed to get container status \"0a817572fff7c8c702cced5a21897841f16a195e27c873c8b93095bd5a983c0b\": rpc error: code = NotFound desc = could not find container \"0a817572fff7c8c702cced5a21897841f16a195e27c873c8b93095bd5a983c0b\": container with ID starting with 0a817572fff7c8c702cced5a21897841f16a195e27c873c8b93095bd5a983c0b not found: ID does not exist"
Oct 14 13:38:39 crc kubenswrapper[4725]: I1014 13:38:39.937493 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3df4eb28-16e5-4156-a438-6333b006728b" path="/var/lib/kubelet/pods/3df4eb28-16e5-4156-a438-6333b006728b/volumes"
Oct 14 13:38:44 crc kubenswrapper[4725]: I1014 13:38:44.900369 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-76cct"]
Oct 14 13:38:44 crc kubenswrapper[4725]: E1014 13:38:44.901343 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3df4eb28-16e5-4156-a438-6333b006728b" containerName="extract-content"
Oct 14 13:38:44 crc kubenswrapper[4725]: I1014 13:38:44.901436 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="3df4eb28-16e5-4156-a438-6333b006728b" containerName="extract-content"
Oct 14 13:38:44 crc kubenswrapper[4725]: E1014 13:38:44.901447 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da8f45ef-2dbd-42b4-ab32-41fdbc941304" containerName="extract-utilities"
Oct 14 13:38:44 crc kubenswrapper[4725]: I1014 13:38:44.901474 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="da8f45ef-2dbd-42b4-ab32-41fdbc941304" containerName="extract-utilities"
Oct 14 13:38:44 crc kubenswrapper[4725]: E1014 13:38:44.901489 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3df4eb28-16e5-4156-a438-6333b006728b" containerName="extract-utilities"
Oct 14 13:38:44 crc kubenswrapper[4725]: I1014 13:38:44.901497 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="3df4eb28-16e5-4156-a438-6333b006728b" containerName="extract-utilities"
Oct 14 13:38:44 crc kubenswrapper[4725]: E1014 13:38:44.901521 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3df4eb28-16e5-4156-a438-6333b006728b" containerName="registry-server"
Oct 14 13:38:44 crc kubenswrapper[4725]: I1014 13:38:44.901528 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="3df4eb28-16e5-4156-a438-6333b006728b" containerName="registry-server"
Oct 14 13:38:44 crc kubenswrapper[4725]: E1014 13:38:44.901539 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da8f45ef-2dbd-42b4-ab32-41fdbc941304" containerName="extract-content"
Oct 14 13:38:44 crc kubenswrapper[4725]: I1014 13:38:44.901546 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="da8f45ef-2dbd-42b4-ab32-41fdbc941304" containerName="extract-content"
Oct 14 13:38:44 crc kubenswrapper[4725]: E1014 13:38:44.901569 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da8f45ef-2dbd-42b4-ab32-41fdbc941304" containerName="registry-server"
Oct 14 13:38:44 crc kubenswrapper[4725]: I1014 13:38:44.901577 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="da8f45ef-2dbd-42b4-ab32-41fdbc941304" containerName="registry-server"
Oct 14 13:38:44 crc kubenswrapper[4725]: I1014 13:38:44.901789 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="da8f45ef-2dbd-42b4-ab32-41fdbc941304" containerName="registry-server"
Oct 14 13:38:44 crc kubenswrapper[4725]: I1014 13:38:44.901823 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="3df4eb28-16e5-4156-a438-6333b006728b" containerName="registry-server"
Oct 14 13:38:44 crc kubenswrapper[4725]: I1014 13:38:44.903382 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-76cct"
Oct 14 13:38:44 crc kubenswrapper[4725]: I1014 13:38:44.915817 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-76cct"]
Oct 14 13:38:45 crc kubenswrapper[4725]: I1014 13:38:45.043930 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8755ee01-3d89-4936-9c92-105a1dbec414-utilities\") pod \"community-operators-76cct\" (UID: \"8755ee01-3d89-4936-9c92-105a1dbec414\") " pod="openshift-marketplace/community-operators-76cct"
Oct 14 13:38:45 crc kubenswrapper[4725]: I1014 13:38:45.043973 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8755ee01-3d89-4936-9c92-105a1dbec414-catalog-content\") pod \"community-operators-76cct\" (UID: \"8755ee01-3d89-4936-9c92-105a1dbec414\") " pod="openshift-marketplace/community-operators-76cct"
Oct 14 13:38:45 crc kubenswrapper[4725]: I1014 13:38:45.044062 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s9ts\" (UniqueName: \"kubernetes.io/projected/8755ee01-3d89-4936-9c92-105a1dbec414-kube-api-access-2s9ts\") pod \"community-operators-76cct\" (UID: \"8755ee01-3d89-4936-9c92-105a1dbec414\") " pod="openshift-marketplace/community-operators-76cct"
Oct 14 13:38:45 crc kubenswrapper[4725]: I1014 13:38:45.146763 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8755ee01-3d89-4936-9c92-105a1dbec414-utilities\") pod \"community-operators-76cct\" (UID: \"8755ee01-3d89-4936-9c92-105a1dbec414\") " pod="openshift-marketplace/community-operators-76cct"
Oct 14 13:38:45 crc kubenswrapper[4725]: I1014 13:38:45.146808 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8755ee01-3d89-4936-9c92-105a1dbec414-catalog-content\") pod \"community-operators-76cct\" (UID: \"8755ee01-3d89-4936-9c92-105a1dbec414\") " pod="openshift-marketplace/community-operators-76cct"
Oct 14 13:38:45 crc kubenswrapper[4725]: I1014 13:38:45.146850 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s9ts\" (UniqueName: \"kubernetes.io/projected/8755ee01-3d89-4936-9c92-105a1dbec414-kube-api-access-2s9ts\") pod \"community-operators-76cct\" (UID: \"8755ee01-3d89-4936-9c92-105a1dbec414\") " pod="openshift-marketplace/community-operators-76cct"
Oct 14 13:38:45 crc kubenswrapper[4725]: I1014 13:38:45.147269 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8755ee01-3d89-4936-9c92-105a1dbec414-utilities\") pod \"community-operators-76cct\" (UID: \"8755ee01-3d89-4936-9c92-105a1dbec414\") " pod="openshift-marketplace/community-operators-76cct"
Oct 14 13:38:45 crc kubenswrapper[4725]: I1014 13:38:45.147363 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8755ee01-3d89-4936-9c92-105a1dbec414-catalog-content\") pod \"community-operators-76cct\" (UID: \"8755ee01-3d89-4936-9c92-105a1dbec414\") " pod="openshift-marketplace/community-operators-76cct"
Oct 14 13:38:45 crc kubenswrapper[4725]: I1014 13:38:45.172413 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s9ts\" (UniqueName: \"kubernetes.io/projected/8755ee01-3d89-4936-9c92-105a1dbec414-kube-api-access-2s9ts\") pod \"community-operators-76cct\" (UID: \"8755ee01-3d89-4936-9c92-105a1dbec414\") " pod="openshift-marketplace/community-operators-76cct"
Oct 14 13:38:45 crc kubenswrapper[4725]: I1014 13:38:45.222355 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-76cct"
Oct 14 13:38:45 crc kubenswrapper[4725]: I1014 13:38:45.732549 4725 scope.go:117] "RemoveContainer" containerID="0cd106722729bf4661ebb248e7c37a7f4c17d35033d56e8bcbd1a0d93589649a"
Oct 14 13:38:45 crc kubenswrapper[4725]: I1014 13:38:45.760334 4725 scope.go:117] "RemoveContainer" containerID="a5d1b40ce1d3e6b000dc8866622c16730ff977dee3bfbf640f75c48553e2600b"
Oct 14 13:38:45 crc kubenswrapper[4725]: I1014 13:38:45.777357 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-76cct"]
Oct 14 13:38:46 crc kubenswrapper[4725]: I1014 13:38:46.271794 4725 generic.go:334] "Generic (PLEG): container finished" podID="8755ee01-3d89-4936-9c92-105a1dbec414" containerID="31e36c4db772ef08653336561ec019a70308530570342fbb78d616595cffc148" exitCode=0
Oct 14 13:38:46 crc kubenswrapper[4725]: I1014 13:38:46.271845 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-76cct" event={"ID":"8755ee01-3d89-4936-9c92-105a1dbec414","Type":"ContainerDied","Data":"31e36c4db772ef08653336561ec019a70308530570342fbb78d616595cffc148"}
Oct 14 13:38:46 crc kubenswrapper[4725]: I1014 13:38:46.271894 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-76cct" event={"ID":"8755ee01-3d89-4936-9c92-105a1dbec414","Type":"ContainerStarted","Data":"33ba012fcfc8045e480d6870ea9453b253733090334b304e286a1552028c1134"}
Oct 14 13:38:48 crc kubenswrapper[4725]: I1014 13:38:48.313173 4725 generic.go:334] "Generic (PLEG): container finished" podID="8755ee01-3d89-4936-9c92-105a1dbec414" containerID="2035b87c3c566d7f5e15649e236e5e2b3bc0f159f4d6f3536fe90918b2f95e9f" exitCode=0
Oct 14 13:38:48 crc kubenswrapper[4725]: I1014 13:38:48.313230 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-76cct" event={"ID":"8755ee01-3d89-4936-9c92-105a1dbec414","Type":"ContainerDied","Data":"2035b87c3c566d7f5e15649e236e5e2b3bc0f159f4d6f3536fe90918b2f95e9f"}
Oct 14 13:38:49 crc kubenswrapper[4725]: I1014 13:38:49.324203 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-76cct" event={"ID":"8755ee01-3d89-4936-9c92-105a1dbec414","Type":"ContainerStarted","Data":"f764d1c4c61b7b3289b28a426671f057aea818e90e03fb8d7af92c3ae7dee4b3"}
Oct 14 13:38:49 crc kubenswrapper[4725]: I1014 13:38:49.340828 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-76cct" podStartSLOduration=2.8701645669999998 podStartE2EDuration="5.340805743s" podCreationTimestamp="2025-10-14 13:38:44 +0000 UTC" firstStartedPulling="2025-10-14 13:38:46.273717594 +0000 UTC m=+1443.122152423" lastFinishedPulling="2025-10-14 13:38:48.74435877 +0000 UTC m=+1445.592793599" observedRunningTime="2025-10-14 13:38:49.339409495 +0000 UTC m=+1446.187844334" watchObservedRunningTime="2025-10-14 13:38:49.340805743 +0000 UTC m=+1446.189240562"
Oct 14 13:38:55 crc kubenswrapper[4725]: I1014 13:38:55.222507 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-76cct"
Oct 14 13:38:55 crc kubenswrapper[4725]: I1014 13:38:55.222980 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-76cct"
Oct 14 13:38:55 crc kubenswrapper[4725]: I1014 13:38:55.279137 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-76cct"
Oct 14 13:38:55 crc kubenswrapper[4725]: I1014 13:38:55.441741 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-76cct"
Oct 14 13:38:55 crc kubenswrapper[4725]: I1014 13:38:55.529490 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-76cct"]
Oct 14 13:38:57 crc kubenswrapper[4725]: I1014 13:38:57.401723 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-76cct" podUID="8755ee01-3d89-4936-9c92-105a1dbec414" containerName="registry-server" containerID="cri-o://f764d1c4c61b7b3289b28a426671f057aea818e90e03fb8d7af92c3ae7dee4b3" gracePeriod=2
Oct 14 13:38:57 crc kubenswrapper[4725]: I1014 13:38:57.918304 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-76cct"
Oct 14 13:38:57 crc kubenswrapper[4725]: I1014 13:38:57.997207 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2s9ts\" (UniqueName: \"kubernetes.io/projected/8755ee01-3d89-4936-9c92-105a1dbec414-kube-api-access-2s9ts\") pod \"8755ee01-3d89-4936-9c92-105a1dbec414\" (UID: \"8755ee01-3d89-4936-9c92-105a1dbec414\") "
Oct 14 13:38:57 crc kubenswrapper[4725]: I1014 13:38:57.997265 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8755ee01-3d89-4936-9c92-105a1dbec414-utilities\") pod \"8755ee01-3d89-4936-9c92-105a1dbec414\" (UID: \"8755ee01-3d89-4936-9c92-105a1dbec414\") "
Oct 14 13:38:57 crc kubenswrapper[4725]: I1014 13:38:57.997519 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8755ee01-3d89-4936-9c92-105a1dbec414-catalog-content\") pod \"8755ee01-3d89-4936-9c92-105a1dbec414\" (UID: \"8755ee01-3d89-4936-9c92-105a1dbec414\") "
Oct 14 13:38:57 crc kubenswrapper[4725]: I1014 13:38:57.998273 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8755ee01-3d89-4936-9c92-105a1dbec414-utilities" (OuterVolumeSpecName: "utilities") pod "8755ee01-3d89-4936-9c92-105a1dbec414" (UID: "8755ee01-3d89-4936-9c92-105a1dbec414"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 13:38:58 crc kubenswrapper[4725]: I1014 13:38:58.002126 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8755ee01-3d89-4936-9c92-105a1dbec414-kube-api-access-2s9ts" (OuterVolumeSpecName: "kube-api-access-2s9ts") pod "8755ee01-3d89-4936-9c92-105a1dbec414" (UID: "8755ee01-3d89-4936-9c92-105a1dbec414"). InnerVolumeSpecName "kube-api-access-2s9ts". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:38:58 crc kubenswrapper[4725]: I1014 13:38:58.100191 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2s9ts\" (UniqueName: \"kubernetes.io/projected/8755ee01-3d89-4936-9c92-105a1dbec414-kube-api-access-2s9ts\") on node \"crc\" DevicePath \"\""
Oct 14 13:38:58 crc kubenswrapper[4725]: I1014 13:38:58.100224 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8755ee01-3d89-4936-9c92-105a1dbec414-utilities\") on node \"crc\" DevicePath \"\""
Oct 14 13:38:58 crc kubenswrapper[4725]: I1014 13:38:58.315180 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8755ee01-3d89-4936-9c92-105a1dbec414-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8755ee01-3d89-4936-9c92-105a1dbec414" (UID: "8755ee01-3d89-4936-9c92-105a1dbec414"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 13:38:58 crc kubenswrapper[4725]: I1014 13:38:58.407762 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8755ee01-3d89-4936-9c92-105a1dbec414-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 14 13:38:58 crc kubenswrapper[4725]: I1014 13:38:58.420190 4725 generic.go:334] "Generic (PLEG): container finished" podID="8755ee01-3d89-4936-9c92-105a1dbec414" containerID="f764d1c4c61b7b3289b28a426671f057aea818e90e03fb8d7af92c3ae7dee4b3" exitCode=0
Oct 14 13:38:58 crc kubenswrapper[4725]: I1014 13:38:58.420249 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-76cct"
Oct 14 13:38:58 crc kubenswrapper[4725]: I1014 13:38:58.420276 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-76cct" event={"ID":"8755ee01-3d89-4936-9c92-105a1dbec414","Type":"ContainerDied","Data":"f764d1c4c61b7b3289b28a426671f057aea818e90e03fb8d7af92c3ae7dee4b3"}
Oct 14 13:38:58 crc kubenswrapper[4725]: I1014 13:38:58.420344 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-76cct" event={"ID":"8755ee01-3d89-4936-9c92-105a1dbec414","Type":"ContainerDied","Data":"33ba012fcfc8045e480d6870ea9453b253733090334b304e286a1552028c1134"}
Oct 14 13:38:58 crc kubenswrapper[4725]: I1014 13:38:58.420366 4725 scope.go:117] "RemoveContainer" containerID="f764d1c4c61b7b3289b28a426671f057aea818e90e03fb8d7af92c3ae7dee4b3"
Oct 14 13:38:58 crc kubenswrapper[4725]: I1014 13:38:58.455012 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-76cct"]
Oct 14 13:38:58 crc kubenswrapper[4725]: I1014 13:38:58.464861 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-76cct"]
Oct 14 13:38:58 crc kubenswrapper[4725]: I1014 13:38:58.465376 4725 scope.go:117] "RemoveContainer" containerID="2035b87c3c566d7f5e15649e236e5e2b3bc0f159f4d6f3536fe90918b2f95e9f"
Oct 14 13:38:58 crc kubenswrapper[4725]: I1014 13:38:58.488195 4725 scope.go:117] "RemoveContainer" containerID="31e36c4db772ef08653336561ec019a70308530570342fbb78d616595cffc148"
Oct 14 13:38:58 crc kubenswrapper[4725]: I1014 13:38:58.534036 4725 scope.go:117] "RemoveContainer" containerID="f764d1c4c61b7b3289b28a426671f057aea818e90e03fb8d7af92c3ae7dee4b3"
Oct 14 13:38:58 crc kubenswrapper[4725]: E1014 13:38:58.534557 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f764d1c4c61b7b3289b28a426671f057aea818e90e03fb8d7af92c3ae7dee4b3\": container with ID starting with f764d1c4c61b7b3289b28a426671f057aea818e90e03fb8d7af92c3ae7dee4b3 not found: ID does not exist" containerID="f764d1c4c61b7b3289b28a426671f057aea818e90e03fb8d7af92c3ae7dee4b3"
Oct 14 13:38:58 crc kubenswrapper[4725]: I1014 13:38:58.534609 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f764d1c4c61b7b3289b28a426671f057aea818e90e03fb8d7af92c3ae7dee4b3"} err="failed to get container status \"f764d1c4c61b7b3289b28a426671f057aea818e90e03fb8d7af92c3ae7dee4b3\": rpc error: code = NotFound desc = could not find container \"f764d1c4c61b7b3289b28a426671f057aea818e90e03fb8d7af92c3ae7dee4b3\": container with ID starting with f764d1c4c61b7b3289b28a426671f057aea818e90e03fb8d7af92c3ae7dee4b3 not found: ID does not exist"
Oct 14 13:38:58 crc kubenswrapper[4725]: I1014 13:38:58.534642 4725 scope.go:117] "RemoveContainer" containerID="2035b87c3c566d7f5e15649e236e5e2b3bc0f159f4d6f3536fe90918b2f95e9f"
Oct 14 13:38:58 crc kubenswrapper[4725]: E1014 13:38:58.534941 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2035b87c3c566d7f5e15649e236e5e2b3bc0f159f4d6f3536fe90918b2f95e9f\": container with ID starting with 2035b87c3c566d7f5e15649e236e5e2b3bc0f159f4d6f3536fe90918b2f95e9f not found: ID does not exist" containerID="2035b87c3c566d7f5e15649e236e5e2b3bc0f159f4d6f3536fe90918b2f95e9f"
Oct 14 13:38:58 crc kubenswrapper[4725]: I1014 13:38:58.534966 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2035b87c3c566d7f5e15649e236e5e2b3bc0f159f4d6f3536fe90918b2f95e9f"} err="failed to get container status \"2035b87c3c566d7f5e15649e236e5e2b3bc0f159f4d6f3536fe90918b2f95e9f\": rpc error: code = NotFound desc = could not find container \"2035b87c3c566d7f5e15649e236e5e2b3bc0f159f4d6f3536fe90918b2f95e9f\": container with ID starting with 2035b87c3c566d7f5e15649e236e5e2b3bc0f159f4d6f3536fe90918b2f95e9f not found: ID does not exist"
Oct 14 13:38:58 crc kubenswrapper[4725]: I1014 13:38:58.534988 4725 scope.go:117] "RemoveContainer" containerID="31e36c4db772ef08653336561ec019a70308530570342fbb78d616595cffc148"
Oct 14 13:38:58 crc kubenswrapper[4725]: E1014 13:38:58.535380 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31e36c4db772ef08653336561ec019a70308530570342fbb78d616595cffc148\": container with ID starting with 31e36c4db772ef08653336561ec019a70308530570342fbb78d616595cffc148 not found: ID does not exist" containerID="31e36c4db772ef08653336561ec019a70308530570342fbb78d616595cffc148"
Oct 14 13:38:58 crc kubenswrapper[4725]: I1014 13:38:58.535423 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31e36c4db772ef08653336561ec019a70308530570342fbb78d616595cffc148"} err="failed to get container status \"31e36c4db772ef08653336561ec019a70308530570342fbb78d616595cffc148\": rpc error: code = NotFound desc = could not find container \"31e36c4db772ef08653336561ec019a70308530570342fbb78d616595cffc148\": container with ID starting with 31e36c4db772ef08653336561ec019a70308530570342fbb78d616595cffc148 not found: ID does not exist"
Oct 14 13:38:59 crc kubenswrapper[4725]: I1014 13:38:59.932935 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8755ee01-3d89-4936-9c92-105a1dbec414" path="/var/lib/kubelet/pods/8755ee01-3d89-4936-9c92-105a1dbec414/volumes"
Oct 14 13:39:32 crc kubenswrapper[4725]: I1014 13:39:32.521101 4725 patch_prober.go:28] interesting pod/machine-config-daemon-t9hh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 14 13:39:32 crc kubenswrapper[4725]: I1014 13:39:32.521838 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 14 13:39:45 crc kubenswrapper[4725]: I1014 13:39:45.935958 4725 scope.go:117] "RemoveContainer" containerID="c6323b32387e1acd6f4cd5e2ba5525eabb7b455620c71b61fe9db1a5c9a82f6f"
Oct 14 13:39:45 crc kubenswrapper[4725]: I1014 13:39:45.960285 4725 scope.go:117] "RemoveContainer" containerID="79c016f0d4798ff8e63082ab743f0b7872522ecc9d2b4c6587ef211104989327"
Oct 14 13:39:45 crc kubenswrapper[4725]: I1014 13:39:45.985779 4725 scope.go:117] "RemoveContainer" containerID="2c506944ff567a8fc9b237bf9cd9ce0b04202da23b03c2481c60163839e5ba3d"
Oct 14 13:40:02 crc kubenswrapper[4725]: I1014 13:40:02.520885 4725 patch_prober.go:28] interesting pod/machine-config-daemon-t9hh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 14 13:40:02 crc kubenswrapper[4725]: I1014 13:40:02.521504 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 14 13:40:32 crc kubenswrapper[4725]: I1014 13:40:32.520235 4725 patch_prober.go:28] interesting pod/machine-config-daemon-t9hh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 14 13:40:32 crc kubenswrapper[4725]: I1014 13:40:32.520800 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 14 13:40:32 crc kubenswrapper[4725]: I1014 13:40:32.520849 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9"
Oct 14 13:40:32 crc kubenswrapper[4725]: I1014 13:40:32.521560 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6f767896ac68228fe09840fae7ddd8f8c1d269f215009c642a1baacf3a7849d4"} pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 14 13:40:32 crc kubenswrapper[4725]: I1014 13:40:32.521652 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" containerID="cri-o://6f767896ac68228fe09840fae7ddd8f8c1d269f215009c642a1baacf3a7849d4" gracePeriod=600
Oct 14 13:40:32 crc kubenswrapper[4725]: E1014 13:40:32.648166 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c"
Oct 14 13:40:33 crc kubenswrapper[4725]: I1014 13:40:33.409276 4725 generic.go:334] "Generic (PLEG): container finished" podID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerID="6f767896ac68228fe09840fae7ddd8f8c1d269f215009c642a1baacf3a7849d4" exitCode=0
Oct 14 13:40:33 crc kubenswrapper[4725]: I1014 13:40:33.409328 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" event={"ID":"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c","Type":"ContainerDied","Data":"6f767896ac68228fe09840fae7ddd8f8c1d269f215009c642a1baacf3a7849d4"}
Oct 14 13:40:33 crc kubenswrapper[4725]: I1014 13:40:33.409371 4725 scope.go:117] "RemoveContainer" containerID="2ed28316bb88cfbc90574ba8c972086f633bc4c19d223765ba69ed10f863412c"
Oct 14 13:40:33 crc kubenswrapper[4725]: I1014 13:40:33.411206 4725 scope.go:117] "RemoveContainer" containerID="6f767896ac68228fe09840fae7ddd8f8c1d269f215009c642a1baacf3a7849d4"
Oct 14 13:40:33 crc kubenswrapper[4725]: E1014 13:40:33.412116 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c"
Oct 14 13:40:34 crc kubenswrapper[4725]: I1014 13:40:34.425461 4725 generic.go:334] "Generic (PLEG): container finished" podID="7fb7769c-8d15-4925-8f3a-3f8e5a81fd72" containerID="c8c999465237885d0a75925c58888ede5aa8ac801591b23d4c2075f674e74a39" exitCode=0
Oct 14 13:40:34 crc kubenswrapper[4725]: I1014 13:40:34.425498 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgkf8" event={"ID":"7fb7769c-8d15-4925-8f3a-3f8e5a81fd72","Type":"ContainerDied","Data":"c8c999465237885d0a75925c58888ede5aa8ac801591b23d4c2075f674e74a39"}
Oct 14 13:40:35 crc kubenswrapper[4725]: I1014 13:40:35.857131 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgkf8"
Oct 14 13:40:35 crc kubenswrapper[4725]: I1014 13:40:35.958056 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lkvd\" (UniqueName: \"kubernetes.io/projected/7fb7769c-8d15-4925-8f3a-3f8e5a81fd72-kube-api-access-5lkvd\") pod \"7fb7769c-8d15-4925-8f3a-3f8e5a81fd72\" (UID: \"7fb7769c-8d15-4925-8f3a-3f8e5a81fd72\") "
Oct 14 13:40:35 crc kubenswrapper[4725]: I1014 13:40:35.958268 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7fb7769c-8d15-4925-8f3a-3f8e5a81fd72-inventory\") pod \"7fb7769c-8d15-4925-8f3a-3f8e5a81fd72\" (UID: \"7fb7769c-8d15-4925-8f3a-3f8e5a81fd72\") "
Oct 14 13:40:35 crc kubenswrapper[4725]: I1014 13:40:35.958395 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fb7769c-8d15-4925-8f3a-3f8e5a81fd72-bootstrap-combined-ca-bundle\") pod \"7fb7769c-8d15-4925-8f3a-3f8e5a81fd72\" (UID: \"7fb7769c-8d15-4925-8f3a-3f8e5a81fd72\") "
Oct 14 13:40:35 crc kubenswrapper[4725]: I1014 13:40:35.958732 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7fb7769c-8d15-4925-8f3a-3f8e5a81fd72-ssh-key\") pod \"7fb7769c-8d15-4925-8f3a-3f8e5a81fd72\" (UID: \"7fb7769c-8d15-4925-8f3a-3f8e5a81fd72\") "
Oct 14 13:40:35 crc kubenswrapper[4725]: I1014 13:40:35.964700 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fb7769c-8d15-4925-8f3a-3f8e5a81fd72-kube-api-access-5lkvd" (OuterVolumeSpecName: "kube-api-access-5lkvd") pod "7fb7769c-8d15-4925-8f3a-3f8e5a81fd72" (UID: "7fb7769c-8d15-4925-8f3a-3f8e5a81fd72"). InnerVolumeSpecName "kube-api-access-5lkvd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:40:35 crc kubenswrapper[4725]: I1014 13:40:35.972548 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fb7769c-8d15-4925-8f3a-3f8e5a81fd72-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "7fb7769c-8d15-4925-8f3a-3f8e5a81fd72" (UID: "7fb7769c-8d15-4925-8f3a-3f8e5a81fd72"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:40:35 crc kubenswrapper[4725]: I1014 13:40:35.994285 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fb7769c-8d15-4925-8f3a-3f8e5a81fd72-inventory" (OuterVolumeSpecName: "inventory") pod "7fb7769c-8d15-4925-8f3a-3f8e5a81fd72" (UID: "7fb7769c-8d15-4925-8f3a-3f8e5a81fd72"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:40:35 crc kubenswrapper[4725]: I1014 13:40:35.994636 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fb7769c-8d15-4925-8f3a-3f8e5a81fd72-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7fb7769c-8d15-4925-8f3a-3f8e5a81fd72" (UID: "7fb7769c-8d15-4925-8f3a-3f8e5a81fd72"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:40:36 crc kubenswrapper[4725]: I1014 13:40:36.061136 4725 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7fb7769c-8d15-4925-8f3a-3f8e5a81fd72-inventory\") on node \"crc\" DevicePath \"\""
Oct 14 13:40:36 crc kubenswrapper[4725]: I1014 13:40:36.061198 4725 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fb7769c-8d15-4925-8f3a-3f8e5a81fd72-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 14 13:40:36 crc kubenswrapper[4725]: I1014 13:40:36.061212 4725 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7fb7769c-8d15-4925-8f3a-3f8e5a81fd72-ssh-key\") on node \"crc\" DevicePath \"\""
Oct 14 13:40:36 crc kubenswrapper[4725]: I1014 13:40:36.061220 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lkvd\" (UniqueName: \"kubernetes.io/projected/7fb7769c-8d15-4925-8f3a-3f8e5a81fd72-kube-api-access-5lkvd\") on node \"crc\" DevicePath \"\""
Oct 14 13:40:36 crc kubenswrapper[4725]: I1014 13:40:36.452291 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgkf8" event={"ID":"7fb7769c-8d15-4925-8f3a-3f8e5a81fd72","Type":"ContainerDied","Data":"f3fcb7c13e9ed5fd3cde7007d6305ef02f7ccbe4c677c6cd70f6ec7db2db6ce7"}
Oct 14 13:40:36 crc kubenswrapper[4725]: I1014 13:40:36.452348 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3fcb7c13e9ed5fd3cde7007d6305ef02f7ccbe4c677c6cd70f6ec7db2db6ce7"
Oct 14 13:40:36 crc kubenswrapper[4725]: I1014 13:40:36.452411 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vgkf8" Oct 14 13:40:36 crc kubenswrapper[4725]: I1014 13:40:36.546945 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-frd7h"] Oct 14 13:40:36 crc kubenswrapper[4725]: E1014 13:40:36.547556 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8755ee01-3d89-4936-9c92-105a1dbec414" containerName="extract-utilities" Oct 14 13:40:36 crc kubenswrapper[4725]: I1014 13:40:36.547578 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="8755ee01-3d89-4936-9c92-105a1dbec414" containerName="extract-utilities" Oct 14 13:40:36 crc kubenswrapper[4725]: E1014 13:40:36.547596 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fb7769c-8d15-4925-8f3a-3f8e5a81fd72" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 14 13:40:36 crc kubenswrapper[4725]: I1014 13:40:36.547605 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fb7769c-8d15-4925-8f3a-3f8e5a81fd72" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 14 13:40:36 crc kubenswrapper[4725]: E1014 13:40:36.547632 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8755ee01-3d89-4936-9c92-105a1dbec414" containerName="extract-content" Oct 14 13:40:36 crc kubenswrapper[4725]: I1014 13:40:36.547640 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="8755ee01-3d89-4936-9c92-105a1dbec414" containerName="extract-content" Oct 14 13:40:36 crc kubenswrapper[4725]: E1014 13:40:36.547655 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8755ee01-3d89-4936-9c92-105a1dbec414" containerName="registry-server" Oct 14 13:40:36 crc kubenswrapper[4725]: I1014 13:40:36.547663 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="8755ee01-3d89-4936-9c92-105a1dbec414" containerName="registry-server" Oct 14 13:40:36 crc kubenswrapper[4725]: I1014 13:40:36.547897 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fb7769c-8d15-4925-8f3a-3f8e5a81fd72" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 14 13:40:36 crc kubenswrapper[4725]: I1014 13:40:36.547924 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="8755ee01-3d89-4936-9c92-105a1dbec414" containerName="registry-server" Oct 14 13:40:36 crc kubenswrapper[4725]: I1014 13:40:36.548738 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-frd7h" Oct 14 13:40:36 crc kubenswrapper[4725]: I1014 13:40:36.553978 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 13:40:36 crc kubenswrapper[4725]: I1014 13:40:36.554066 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 13:40:36 crc kubenswrapper[4725]: I1014 13:40:36.554069 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 13:40:36 crc kubenswrapper[4725]: I1014 13:40:36.554179 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qnvbv" Oct 14 13:40:36 crc kubenswrapper[4725]: I1014 13:40:36.558675 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-frd7h"] Oct 14 13:40:36 crc kubenswrapper[4725]: I1014 13:40:36.570813 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ba9f871-37fb-47be-b960-dc85368cff29-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-frd7h\" (UID: \"9ba9f871-37fb-47be-b960-dc85368cff29\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-frd7h" Oct 14 13:40:36 crc kubenswrapper[4725]: I1014 13:40:36.570883 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9ba9f871-37fb-47be-b960-dc85368cff29-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-frd7h\" (UID: \"9ba9f871-37fb-47be-b960-dc85368cff29\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-frd7h" Oct 14 13:40:36 crc kubenswrapper[4725]: I1014 13:40:36.571003 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wldzk\" (UniqueName: \"kubernetes.io/projected/9ba9f871-37fb-47be-b960-dc85368cff29-kube-api-access-wldzk\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-frd7h\" (UID: \"9ba9f871-37fb-47be-b960-dc85368cff29\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-frd7h" Oct 14 13:40:36 crc kubenswrapper[4725]: I1014 13:40:36.672517 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ba9f871-37fb-47be-b960-dc85368cff29-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-frd7h\" (UID: \"9ba9f871-37fb-47be-b960-dc85368cff29\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-frd7h" Oct 14 13:40:36 crc kubenswrapper[4725]: I1014 13:40:36.672581 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9ba9f871-37fb-47be-b960-dc85368cff29-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-frd7h\" (UID: \"9ba9f871-37fb-47be-b960-dc85368cff29\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-frd7h" Oct 14 13:40:36 crc kubenswrapper[4725]: I1014 13:40:36.672715 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wldzk\" (UniqueName: \"kubernetes.io/projected/9ba9f871-37fb-47be-b960-dc85368cff29-kube-api-access-wldzk\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-frd7h\" (UID: \"9ba9f871-37fb-47be-b960-dc85368cff29\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-frd7h" Oct 14 13:40:36 crc kubenswrapper[4725]: I1014 13:40:36.676783 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9ba9f871-37fb-47be-b960-dc85368cff29-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-frd7h\" (UID: \"9ba9f871-37fb-47be-b960-dc85368cff29\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-frd7h" Oct 14 13:40:36 crc kubenswrapper[4725]: I1014 13:40:36.679679 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ba9f871-37fb-47be-b960-dc85368cff29-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-frd7h\" (UID: \"9ba9f871-37fb-47be-b960-dc85368cff29\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-frd7h" Oct 14 13:40:36 crc kubenswrapper[4725]: I1014 13:40:36.703236 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wldzk\" (UniqueName: \"kubernetes.io/projected/9ba9f871-37fb-47be-b960-dc85368cff29-kube-api-access-wldzk\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-frd7h\" (UID: \"9ba9f871-37fb-47be-b960-dc85368cff29\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-frd7h" Oct 14 13:40:36 crc kubenswrapper[4725]: I1014 13:40:36.883064 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-frd7h" Oct 14 13:40:37 crc kubenswrapper[4725]: I1014 13:40:37.531607 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-frd7h"] Oct 14 13:40:37 crc kubenswrapper[4725]: W1014 13:40:37.539692 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ba9f871_37fb_47be_b960_dc85368cff29.slice/crio-290abf65bea8d3ada30072573939af376983207cc65744857838448f6cffa5db WatchSource:0}: Error finding container 290abf65bea8d3ada30072573939af376983207cc65744857838448f6cffa5db: Status 404 returned error can't find the container with id 290abf65bea8d3ada30072573939af376983207cc65744857838448f6cffa5db Oct 14 13:40:38 crc kubenswrapper[4725]: I1014 13:40:38.479150 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-frd7h" event={"ID":"9ba9f871-37fb-47be-b960-dc85368cff29","Type":"ContainerStarted","Data":"36f15a996e38fef52a00109afc5a0e3334b4fa728fb86bcf2edbd93dfa4e8025"} Oct 14 13:40:38 crc kubenswrapper[4725]: I1014 13:40:38.480194 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-frd7h" event={"ID":"9ba9f871-37fb-47be-b960-dc85368cff29","Type":"ContainerStarted","Data":"290abf65bea8d3ada30072573939af376983207cc65744857838448f6cffa5db"} Oct 14 13:40:38 crc kubenswrapper[4725]: I1014 13:40:38.515989 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-frd7h" podStartSLOduration=1.948631808 podStartE2EDuration="2.515960172s" podCreationTimestamp="2025-10-14 13:40:36 +0000 UTC" firstStartedPulling="2025-10-14 13:40:37.54270115 +0000 UTC m=+1554.391136009" lastFinishedPulling="2025-10-14 
13:40:38.110029544 +0000 UTC m=+1554.958464373" observedRunningTime="2025-10-14 13:40:38.49828346 +0000 UTC m=+1555.346718279" watchObservedRunningTime="2025-10-14 13:40:38.515960172 +0000 UTC m=+1555.364395021" Oct 14 13:40:47 crc kubenswrapper[4725]: I1014 13:40:47.922576 4725 scope.go:117] "RemoveContainer" containerID="6f767896ac68228fe09840fae7ddd8f8c1d269f215009c642a1baacf3a7849d4" Oct 14 13:40:47 crc kubenswrapper[4725]: E1014 13:40:47.923389 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 13:40:59 crc kubenswrapper[4725]: I1014 13:40:59.921815 4725 scope.go:117] "RemoveContainer" containerID="6f767896ac68228fe09840fae7ddd8f8c1d269f215009c642a1baacf3a7849d4" Oct 14 13:40:59 crc kubenswrapper[4725]: E1014 13:40:59.922725 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 13:41:11 crc kubenswrapper[4725]: I1014 13:41:11.921299 4725 scope.go:117] "RemoveContainer" containerID="6f767896ac68228fe09840fae7ddd8f8c1d269f215009c642a1baacf3a7849d4" Oct 14 13:41:11 crc kubenswrapper[4725]: E1014 13:41:11.922346 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 13:41:26 crc kubenswrapper[4725]: I1014 13:41:26.921383 4725 scope.go:117] "RemoveContainer" containerID="6f767896ac68228fe09840fae7ddd8f8c1d269f215009c642a1baacf3a7849d4" Oct 14 13:41:26 crc kubenswrapper[4725]: E1014 13:41:26.922579 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 13:41:31 crc kubenswrapper[4725]: I1014 13:41:31.041786 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-q7pwj"] Oct 14 13:41:31 crc kubenswrapper[4725]: I1014 13:41:31.050694 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-bs9b4"] Oct 14 13:41:31 crc kubenswrapper[4725]: I1014 13:41:31.060954 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-bs9b4"] Oct 14 13:41:31 crc kubenswrapper[4725]: I1014 13:41:31.069250 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-q7pwj"] Oct 14 
13:41:31 crc kubenswrapper[4725]: I1014 13:41:31.941660 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fca1233-9113-4f55-8632-c75d41eabd80" path="/var/lib/kubelet/pods/1fca1233-9113-4f55-8632-c75d41eabd80/volumes" Oct 14 13:41:31 crc kubenswrapper[4725]: I1014 13:41:31.942911 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a8d6508-9075-409d-b8d8-ed03113819a1" path="/var/lib/kubelet/pods/4a8d6508-9075-409d-b8d8-ed03113819a1/volumes" Oct 14 13:41:32 crc kubenswrapper[4725]: I1014 13:41:32.054293 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-7598j"] Oct 14 13:41:32 crc kubenswrapper[4725]: I1014 13:41:32.062956 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-7598j"] Oct 14 13:41:33 crc kubenswrapper[4725]: I1014 13:41:33.939973 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c7bd34b-5329-4592-a439-7d5eaf070bfa" path="/var/lib/kubelet/pods/2c7bd34b-5329-4592-a439-7d5eaf070bfa/volumes" Oct 14 13:41:40 crc kubenswrapper[4725]: I1014 13:41:40.921198 4725 scope.go:117] "RemoveContainer" containerID="6f767896ac68228fe09840fae7ddd8f8c1d269f215009c642a1baacf3a7849d4" Oct 14 13:41:40 crc kubenswrapper[4725]: E1014 13:41:40.922009 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 13:41:42 crc kubenswrapper[4725]: I1014 13:41:42.061988 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-335f-account-create-5ksxz"] Oct 14 13:41:42 crc kubenswrapper[4725]: I1014 13:41:42.076436 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-9110-account-create-rdmx4"] Oct 14 13:41:42 crc kubenswrapper[4725]: I1014 13:41:42.090143 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-ae98-account-create-cl2rs"] Oct 14 13:41:42 crc kubenswrapper[4725]: I1014 13:41:42.101560 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-335f-account-create-5ksxz"] Oct 14 13:41:42 crc kubenswrapper[4725]: I1014 13:41:42.110401 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-ae98-account-create-cl2rs"] Oct 14 13:41:42 crc kubenswrapper[4725]: I1014 13:41:42.117069 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-9110-account-create-rdmx4"] Oct 14 13:41:43 crc kubenswrapper[4725]: I1014 13:41:43.938576 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14209ea5-288f-46d5-b9ee-860116dad16c" path="/var/lib/kubelet/pods/14209ea5-288f-46d5-b9ee-860116dad16c/volumes" Oct 14 13:41:43 crc kubenswrapper[4725]: I1014 13:41:43.940083 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20827da8-286c-48f4-ae94-d6de62502d1f" path="/var/lib/kubelet/pods/20827da8-286c-48f4-ae94-d6de62502d1f/volumes" Oct 14 13:41:43 crc kubenswrapper[4725]: I1014 13:41:43.941266 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2915d57-ca9a-4fff-ad1c-c51ae4f89775" path="/var/lib/kubelet/pods/c2915d57-ca9a-4fff-ad1c-c51ae4f89775/volumes" Oct 14 13:41:46 crc kubenswrapper[4725]: I1014 
13:41:46.128716 4725 scope.go:117] "RemoveContainer" containerID="c04cc397ec46ab6329e384403f2d4e33b46a326ee9e90b9cf68993007c3d5b1b" Oct 14 13:41:46 crc kubenswrapper[4725]: I1014 13:41:46.160550 4725 scope.go:117] "RemoveContainer" containerID="78b8efc9be8f28637bd3f5011b9ff51a11508f7a312234900bcf5a21ba6071a1" Oct 14 13:41:46 crc kubenswrapper[4725]: I1014 13:41:46.213609 4725 scope.go:117] "RemoveContainer" containerID="46002d2a081ca4d5eff82609f27fe46ec88e562c31779ebb6fbc39e248ea1225" Oct 14 13:41:46 crc kubenswrapper[4725]: I1014 13:41:46.252988 4725 scope.go:117] "RemoveContainer" containerID="44a94dea2132f7abb9f12750d08d0e9c1a32705752da8ac9fcc661b68ab41be0" Oct 14 13:41:46 crc kubenswrapper[4725]: I1014 13:41:46.294549 4725 scope.go:117] "RemoveContainer" containerID="15814c7e8be7cb0545643bad06b141691c05a600c9c858a24c57a8908122e3e8" Oct 14 13:41:46 crc kubenswrapper[4725]: I1014 13:41:46.346541 4725 scope.go:117] "RemoveContainer" containerID="1b458b268dfcbfa776d3cc7d4ab7d5ddf51a8261c46fac13cdb54a3e77abfb0a" Oct 14 13:41:52 crc kubenswrapper[4725]: I1014 13:41:52.921307 4725 scope.go:117] "RemoveContainer" containerID="6f767896ac68228fe09840fae7ddd8f8c1d269f215009c642a1baacf3a7849d4" Oct 14 13:41:52 crc kubenswrapper[4725]: E1014 13:41:52.922204 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 13:42:00 crc kubenswrapper[4725]: I1014 13:42:00.052164 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-lspzt"] Oct 14 13:42:00 crc kubenswrapper[4725]: I1014 13:42:00.069575 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-xbh4s"] Oct 14 13:42:00 crc kubenswrapper[4725]: I1014 13:42:00.080038 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-2p9qc"] Oct 14 13:42:00 crc kubenswrapper[4725]: I1014 13:42:00.090068 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-xbh4s"] Oct 14 13:42:00 crc kubenswrapper[4725]: I1014 13:42:00.098663 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-2p9qc"] Oct 14 13:42:00 crc kubenswrapper[4725]: I1014 13:42:00.105980 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-lspzt"] Oct 14 13:42:01 crc kubenswrapper[4725]: I1014 13:42:01.933255 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a3f32f8-bd1d-47c8-ab85-6ca3d5e571fe" path="/var/lib/kubelet/pods/2a3f32f8-bd1d-47c8-ab85-6ca3d5e571fe/volumes" Oct 14 13:42:01 crc kubenswrapper[4725]: I1014 13:42:01.934367 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c91e6d9-e590-4467-81db-d9a571375693" path="/var/lib/kubelet/pods/9c91e6d9-e590-4467-81db-d9a571375693/volumes" Oct 14 13:42:01 crc kubenswrapper[4725]: I1014 13:42:01.935093 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a34964eb-f117-4f77-a5f8-bbc22ae15966" path="/var/lib/kubelet/pods/a34964eb-f117-4f77-a5f8-bbc22ae15966/volumes" Oct 14 13:42:05 crc kubenswrapper[4725]: I1014 13:42:05.053637 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-db-sync-5j9rx"] Oct 14 13:42:05 crc kubenswrapper[4725]: I1014 13:42:05.060801 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-5j9rx"] Oct 14 13:42:05 crc kubenswrapper[4725]: I1014 13:42:05.946289 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b32894cc-6bf3-46d4-981c-be6040373b59" path="/var/lib/kubelet/pods/b32894cc-6bf3-46d4-981c-be6040373b59/volumes" Oct 14 13:42:06 crc kubenswrapper[4725]: I1014 13:42:06.431233 4725 generic.go:334] "Generic (PLEG): container finished" podID="9ba9f871-37fb-47be-b960-dc85368cff29" containerID="36f15a996e38fef52a00109afc5a0e3334b4fa728fb86bcf2edbd93dfa4e8025" exitCode=0 Oct 14 13:42:06 crc kubenswrapper[4725]: I1014 13:42:06.431310 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-frd7h" event={"ID":"9ba9f871-37fb-47be-b960-dc85368cff29","Type":"ContainerDied","Data":"36f15a996e38fef52a00109afc5a0e3334b4fa728fb86bcf2edbd93dfa4e8025"} Oct 14 13:42:06 crc kubenswrapper[4725]: I1014 13:42:06.921829 4725 scope.go:117] "RemoveContainer" containerID="6f767896ac68228fe09840fae7ddd8f8c1d269f215009c642a1baacf3a7849d4" Oct 14 13:42:06 crc kubenswrapper[4725]: E1014 13:42:06.922315 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 13:42:07 crc kubenswrapper[4725]: I1014 13:42:07.833086 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-frd7h" Oct 14 13:42:07 crc kubenswrapper[4725]: I1014 13:42:07.845258 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ba9f871-37fb-47be-b960-dc85368cff29-inventory\") pod \"9ba9f871-37fb-47be-b960-dc85368cff29\" (UID: \"9ba9f871-37fb-47be-b960-dc85368cff29\") " Oct 14 13:42:07 crc kubenswrapper[4725]: I1014 13:42:07.845398 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wldzk\" (UniqueName: \"kubernetes.io/projected/9ba9f871-37fb-47be-b960-dc85368cff29-kube-api-access-wldzk\") pod \"9ba9f871-37fb-47be-b960-dc85368cff29\" (UID: \"9ba9f871-37fb-47be-b960-dc85368cff29\") " Oct 14 13:42:07 crc kubenswrapper[4725]: I1014 13:42:07.845435 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9ba9f871-37fb-47be-b960-dc85368cff29-ssh-key\") pod \"9ba9f871-37fb-47be-b960-dc85368cff29\" (UID: \"9ba9f871-37fb-47be-b960-dc85368cff29\") " Oct 14 13:42:07 crc kubenswrapper[4725]: I1014 13:42:07.856353 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ba9f871-37fb-47be-b960-dc85368cff29-kube-api-access-wldzk" (OuterVolumeSpecName: "kube-api-access-wldzk") pod "9ba9f871-37fb-47be-b960-dc85368cff29" (UID: "9ba9f871-37fb-47be-b960-dc85368cff29"). InnerVolumeSpecName "kube-api-access-wldzk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:42:07 crc kubenswrapper[4725]: I1014 13:42:07.877574 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ba9f871-37fb-47be-b960-dc85368cff29-inventory" (OuterVolumeSpecName: "inventory") pod "9ba9f871-37fb-47be-b960-dc85368cff29" (UID: "9ba9f871-37fb-47be-b960-dc85368cff29"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:42:07 crc kubenswrapper[4725]: I1014 13:42:07.882860 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ba9f871-37fb-47be-b960-dc85368cff29-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9ba9f871-37fb-47be-b960-dc85368cff29" (UID: "9ba9f871-37fb-47be-b960-dc85368cff29"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:42:07 crc kubenswrapper[4725]: I1014 13:42:07.955187 4725 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ba9f871-37fb-47be-b960-dc85368cff29-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 13:42:07 crc kubenswrapper[4725]: I1014 13:42:07.955284 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wldzk\" (UniqueName: \"kubernetes.io/projected/9ba9f871-37fb-47be-b960-dc85368cff29-kube-api-access-wldzk\") on node \"crc\" DevicePath \"\"" Oct 14 13:42:07 crc kubenswrapper[4725]: I1014 13:42:07.955301 4725 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9ba9f871-37fb-47be-b960-dc85368cff29-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 13:42:08 crc kubenswrapper[4725]: I1014 13:42:08.041745 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-x95f9"] Oct 14 13:42:08 crc kubenswrapper[4725]: I1014 13:42:08.052237 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-x95f9"] Oct 14 13:42:08 crc kubenswrapper[4725]: I1014 13:42:08.457413 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-frd7h" event={"ID":"9ba9f871-37fb-47be-b960-dc85368cff29","Type":"ContainerDied","Data":"290abf65bea8d3ada30072573939af376983207cc65744857838448f6cffa5db"} Oct 14 13:42:08 crc kubenswrapper[4725]: I1014 13:42:08.457506 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="290abf65bea8d3ada30072573939af376983207cc65744857838448f6cffa5db" Oct 14 13:42:08 crc kubenswrapper[4725]: I1014 13:42:08.457586 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-frd7h" Oct 14 13:42:08 crc kubenswrapper[4725]: I1014 13:42:08.572941 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rmqbp"] Oct 14 13:42:08 crc kubenswrapper[4725]: E1014 13:42:08.573770 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ba9f871-37fb-47be-b960-dc85368cff29" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Oct 14 13:42:08 crc kubenswrapper[4725]: I1014 13:42:08.573817 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ba9f871-37fb-47be-b960-dc85368cff29" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Oct 14 13:42:08 crc kubenswrapper[4725]: I1014 13:42:08.574378 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ba9f871-37fb-47be-b960-dc85368cff29" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Oct 14 13:42:08 crc kubenswrapper[4725]: I1014 13:42:08.575741 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rmqbp" Oct 14 13:42:08 crc kubenswrapper[4725]: I1014 13:42:08.578681 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 13:42:08 crc kubenswrapper[4725]: I1014 13:42:08.578980 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 13:42:08 crc kubenswrapper[4725]: I1014 13:42:08.580027 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 13:42:08 crc kubenswrapper[4725]: I1014 13:42:08.582053 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qnvbv" Oct 14 13:42:08 crc kubenswrapper[4725]: I1014 13:42:08.590848 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rmqbp"] Oct 14 13:42:08 crc kubenswrapper[4725]: I1014 13:42:08.770984 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd8fx\" (UniqueName: \"kubernetes.io/projected/0292d7e3-6419-47e4-aa3f-acdfe45f9f01-kube-api-access-cd8fx\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rmqbp\" (UID: \"0292d7e3-6419-47e4-aa3f-acdfe45f9f01\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rmqbp" Oct 14 13:42:08 crc kubenswrapper[4725]: I1014 13:42:08.771066 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0292d7e3-6419-47e4-aa3f-acdfe45f9f01-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rmqbp\" (UID: \"0292d7e3-6419-47e4-aa3f-acdfe45f9f01\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rmqbp" Oct 14 13:42:08 crc kubenswrapper[4725]: I1014 13:42:08.771137 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0292d7e3-6419-47e4-aa3f-acdfe45f9f01-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rmqbp\" (UID: \"0292d7e3-6419-47e4-aa3f-acdfe45f9f01\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rmqbp" Oct 14 13:42:08 crc 
kubenswrapper[4725]: I1014 13:42:08.873399 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd8fx\" (UniqueName: \"kubernetes.io/projected/0292d7e3-6419-47e4-aa3f-acdfe45f9f01-kube-api-access-cd8fx\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rmqbp\" (UID: \"0292d7e3-6419-47e4-aa3f-acdfe45f9f01\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rmqbp" Oct 14 13:42:08 crc kubenswrapper[4725]: I1014 13:42:08.873553 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0292d7e3-6419-47e4-aa3f-acdfe45f9f01-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rmqbp\" (UID: \"0292d7e3-6419-47e4-aa3f-acdfe45f9f01\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rmqbp" Oct 14 13:42:08 crc kubenswrapper[4725]: I1014 13:42:08.873711 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0292d7e3-6419-47e4-aa3f-acdfe45f9f01-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rmqbp\" (UID: \"0292d7e3-6419-47e4-aa3f-acdfe45f9f01\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rmqbp" Oct 14 13:42:08 crc kubenswrapper[4725]: I1014 13:42:08.880440 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0292d7e3-6419-47e4-aa3f-acdfe45f9f01-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rmqbp\" (UID: \"0292d7e3-6419-47e4-aa3f-acdfe45f9f01\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rmqbp" Oct 14 13:42:08 crc kubenswrapper[4725]: I1014 13:42:08.880673 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0292d7e3-6419-47e4-aa3f-acdfe45f9f01-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rmqbp\" (UID: \"0292d7e3-6419-47e4-aa3f-acdfe45f9f01\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rmqbp" Oct 14 13:42:08 crc kubenswrapper[4725]: I1014 13:42:08.902438 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd8fx\" (UniqueName: \"kubernetes.io/projected/0292d7e3-6419-47e4-aa3f-acdfe45f9f01-kube-api-access-cd8fx\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rmqbp\" (UID: \"0292d7e3-6419-47e4-aa3f-acdfe45f9f01\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rmqbp" Oct 14 13:42:09 crc kubenswrapper[4725]: I1014 13:42:09.038921 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-d39e-account-create-r5sdl"] Oct 14 13:42:09 crc kubenswrapper[4725]: I1014 13:42:09.053704 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-d39e-account-create-r5sdl"] Oct 14 13:42:09 crc kubenswrapper[4725]: I1014 13:42:09.062138 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-5a30-account-create-r6ccp"] Oct 14 13:42:09 crc kubenswrapper[4725]: I1014 13:42:09.069339 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-5a30-account-create-r6ccp"] Oct 14 13:42:09 crc kubenswrapper[4725]: I1014 13:42:09.197192 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rmqbp" Oct 14 13:42:09 crc kubenswrapper[4725]: I1014 13:42:09.727910 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rmqbp"] Oct 14 13:42:09 crc kubenswrapper[4725]: I1014 13:42:09.735044 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 13:42:09 crc kubenswrapper[4725]: I1014 13:42:09.933950 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35b7e347-c4b6-450d-88c1-f352d4301fed" path="/var/lib/kubelet/pods/35b7e347-c4b6-450d-88c1-f352d4301fed/volumes" Oct 14 13:42:09 crc kubenswrapper[4725]: I1014 13:42:09.935500 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80a33093-d85c-4406-9308-38a8b757040b" path="/var/lib/kubelet/pods/80a33093-d85c-4406-9308-38a8b757040b/volumes" Oct 14 13:42:09 crc kubenswrapper[4725]: I1014 13:42:09.936031 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e18d471e-098d-4024-9e83-ed212464dad9" path="/var/lib/kubelet/pods/e18d471e-098d-4024-9e83-ed212464dad9/volumes" Oct 14 13:42:10 crc kubenswrapper[4725]: I1014 13:42:10.029807 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-c12d-account-create-m5xhh"] Oct 14 13:42:10 crc kubenswrapper[4725]: I1014 13:42:10.037692 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-c12d-account-create-m5xhh"] Oct 14 13:42:10 crc kubenswrapper[4725]: I1014 13:42:10.498744 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rmqbp" event={"ID":"0292d7e3-6419-47e4-aa3f-acdfe45f9f01","Type":"ContainerStarted","Data":"dcf6a89fbe77bbce3d74b8e38d862f740742744c0460f275a9b736eb5aba6f38"} Oct 14 13:42:10 crc kubenswrapper[4725]: I1014 13:42:10.499170 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rmqbp" event={"ID":"0292d7e3-6419-47e4-aa3f-acdfe45f9f01","Type":"ContainerStarted","Data":"c5a40a93cb524d129f9f83c86df7a9be608cf3d5c8a519686a4e2bc640f4eb7d"} Oct 14 13:42:10 crc kubenswrapper[4725]: I1014 13:42:10.525960 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rmqbp" podStartSLOduration=2.057617639 podStartE2EDuration="2.52594124s" podCreationTimestamp="2025-10-14 13:42:08 +0000 UTC" firstStartedPulling="2025-10-14 13:42:09.734730053 +0000 UTC m=+1646.583164862" lastFinishedPulling="2025-10-14 13:42:10.203053654 +0000 UTC m=+1647.051488463" observedRunningTime="2025-10-14 13:42:10.521272061 +0000 UTC m=+1647.369706870" watchObservedRunningTime="2025-10-14 13:42:10.52594124 +0000 UTC m=+1647.374376039" Oct 14 13:42:11 crc kubenswrapper[4725]: I1014 13:42:11.952012 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="553743dd-4497-4092-8723-3c03580db239" path="/var/lib/kubelet/pods/553743dd-4497-4092-8723-3c03580db239/volumes" Oct 14 13:42:19 crc kubenswrapper[4725]: I1014 13:42:19.921713 4725 scope.go:117] "RemoveContainer" containerID="6f767896ac68228fe09840fae7ddd8f8c1d269f215009c642a1baacf3a7849d4" Oct 14 13:42:19 crc kubenswrapper[4725]: E1014 13:42:19.922432 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 13:42:34 crc kubenswrapper[4725]: I1014 13:42:34.921710 4725 scope.go:117] "RemoveContainer" containerID="6f767896ac68228fe09840fae7ddd8f8c1d269f215009c642a1baacf3a7849d4" Oct 14 13:42:34 crc kubenswrapper[4725]: E1014 13:42:34.922318 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 13:42:40 crc kubenswrapper[4725]: I1014 13:42:40.087553 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-5sg8k"] Oct 14 13:42:40 crc kubenswrapper[4725]: I1014 13:42:40.097380 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-5sg8k"] Oct 14 13:42:41 crc kubenswrapper[4725]: I1014 13:42:41.939489 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0" path="/var/lib/kubelet/pods/9ef8f3bd-56a2-4a00-8ad1-09f6771d02c0/volumes" Oct 14 13:42:44 crc kubenswrapper[4725]: I1014 13:42:44.043018 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-fgqxw"] Oct 14 13:42:44 crc kubenswrapper[4725]: I1014 13:42:44.056722 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-fgqxw"] Oct 14 13:42:45 crc kubenswrapper[4725]: I1014 13:42:45.934366 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d968db2-d49e-4eee-9927-11fd32b9cd89" path="/var/lib/kubelet/pods/7d968db2-d49e-4eee-9927-11fd32b9cd89/volumes" Oct 14 13:42:46 crc kubenswrapper[4725]: I1014 13:42:46.509200 4725 scope.go:117] "RemoveContainer" containerID="ccd77cba56bcb481918438a50be2c8a2164a709fe2f4b8ba0be0f69f120fae38" Oct 14 13:42:46 crc kubenswrapper[4725]: I1014 13:42:46.549509 4725 scope.go:117] "RemoveContainer" containerID="3a760e040e946e58c2caa0b2189ed2e47fe00bb9932bce387dc8a372eff04c60" Oct 14 13:42:46 crc kubenswrapper[4725]: I1014 13:42:46.588021 4725 scope.go:117] "RemoveContainer" containerID="bcd59c199f1ffb1f3eea3423994ce494df0c2dc612ee5d525905a069776ec8a7" Oct 14 13:42:46 crc kubenswrapper[4725]: I1014 13:42:46.631652 4725 scope.go:117] "RemoveContainer" containerID="4b5104dd6a19fb6f62cda6b685dbf82642a7eef3812133d369a0aeb4d69c21c3" Oct 14 13:42:46 crc kubenswrapper[4725]: I1014 13:42:46.678032 4725 scope.go:117] "RemoveContainer" containerID="9b0a728ed29905664ea6e13e6214b56b36b8af5b20176fbc4a20a1123e1ddfdf" Oct 14 13:42:46 crc kubenswrapper[4725]: I1014 13:42:46.738190 4725 scope.go:117] "RemoveContainer" containerID="43c4927714fb8b7e1659f7d263e061d3f13e6778d651fd5983c683b963b584ae" Oct 14 13:42:46 crc kubenswrapper[4725]: I1014 13:42:46.790656 4725 scope.go:117] "RemoveContainer" containerID="1add8eb3ccc3329b410231e970f2cdcd827d32cc14a769fea42289d6cd77b06a" Oct 14 13:42:46 crc kubenswrapper[4725]: I1014 13:42:46.830157 4725 scope.go:117] "RemoveContainer" containerID="35086069c239f4b52ceaa888fba55e318a12b66b854c9a72abd62f6499dbac7e" Oct 14 13:42:46 crc kubenswrapper[4725]: I1014 
13:42:46.855640 4725 scope.go:117] "RemoveContainer" containerID="12e1f99cec1847abbaa03479237d08cf7c04a9c377c0c7e4cd3e9ac509590f5e" Oct 14 13:42:46 crc kubenswrapper[4725]: I1014 13:42:46.881539 4725 scope.go:117] "RemoveContainer" containerID="0de015714ba68014cf3ba29e0b115fe0ab458bc94916393b0b696298470b7b26" Oct 14 13:42:48 crc kubenswrapper[4725]: I1014 13:42:48.923003 4725 scope.go:117] "RemoveContainer" containerID="6f767896ac68228fe09840fae7ddd8f8c1d269f215009c642a1baacf3a7849d4" Oct 14 13:42:48 crc kubenswrapper[4725]: E1014 13:42:48.923798 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 13:42:49 crc kubenswrapper[4725]: I1014 13:42:49.041842 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-tlq7r"] Oct 14 13:42:49 crc kubenswrapper[4725]: I1014 13:42:49.051383 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-tlq7r"] Oct 14 13:42:49 crc kubenswrapper[4725]: I1014 13:42:49.937186 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22b0bf13-3251-446f-946b-273f89349427" path="/var/lib/kubelet/pods/22b0bf13-3251-446f-946b-273f89349427/volumes" Oct 14 13:43:00 crc kubenswrapper[4725]: I1014 13:43:00.043339 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-rwcgw"] Oct 14 13:43:00 crc kubenswrapper[4725]: I1014 13:43:00.054358 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-rwcgw"] Oct 14 13:43:00 crc kubenswrapper[4725]: I1014 13:43:00.922259 4725 scope.go:117] "RemoveContainer" containerID="6f767896ac68228fe09840fae7ddd8f8c1d269f215009c642a1baacf3a7849d4" Oct 14 13:43:00 crc kubenswrapper[4725]: E1014 13:43:00.923039 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 13:43:01 crc kubenswrapper[4725]: I1014 13:43:01.034349 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-gzbkx"] Oct 14 13:43:01 crc kubenswrapper[4725]: I1014 13:43:01.042129 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-gzbkx"] Oct 14 13:43:01 crc kubenswrapper[4725]: I1014 13:43:01.933392 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f19eb4e-4359-4092-a050-e1d695fbb891" path="/var/lib/kubelet/pods/1f19eb4e-4359-4092-a050-e1d695fbb891/volumes" Oct 14 13:43:01 crc kubenswrapper[4725]: I1014 13:43:01.934384 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec415043-bd33-4ab3-8437-28eda0458656" path="/var/lib/kubelet/pods/ec415043-bd33-4ab3-8437-28eda0458656/volumes" Oct 14 13:43:13 crc kubenswrapper[4725]: I1014 13:43:13.926402 4725 scope.go:117] "RemoveContainer" containerID="6f767896ac68228fe09840fae7ddd8f8c1d269f215009c642a1baacf3a7849d4" Oct 14 13:43:13 crc 
kubenswrapper[4725]: E1014 13:43:13.927134 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 13:43:22 crc kubenswrapper[4725]: I1014 13:43:22.331604 4725 generic.go:334] "Generic (PLEG): container finished" podID="0292d7e3-6419-47e4-aa3f-acdfe45f9f01" containerID="dcf6a89fbe77bbce3d74b8e38d862f740742744c0460f275a9b736eb5aba6f38" exitCode=0 Oct 14 13:43:22 crc kubenswrapper[4725]: I1014 13:43:22.331723 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rmqbp" event={"ID":"0292d7e3-6419-47e4-aa3f-acdfe45f9f01","Type":"ContainerDied","Data":"dcf6a89fbe77bbce3d74b8e38d862f740742744c0460f275a9b736eb5aba6f38"} Oct 14 13:43:23 crc kubenswrapper[4725]: I1014 13:43:23.784798 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rmqbp" Oct 14 13:43:23 crc kubenswrapper[4725]: I1014 13:43:23.872739 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0292d7e3-6419-47e4-aa3f-acdfe45f9f01-inventory\") pod \"0292d7e3-6419-47e4-aa3f-acdfe45f9f01\" (UID: \"0292d7e3-6419-47e4-aa3f-acdfe45f9f01\") " Oct 14 13:43:23 crc kubenswrapper[4725]: I1014 13:43:23.873101 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0292d7e3-6419-47e4-aa3f-acdfe45f9f01-ssh-key\") pod \"0292d7e3-6419-47e4-aa3f-acdfe45f9f01\" (UID: \"0292d7e3-6419-47e4-aa3f-acdfe45f9f01\") " Oct 14 13:43:23 crc kubenswrapper[4725]: I1014 13:43:23.873306 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cd8fx\" (UniqueName: \"kubernetes.io/projected/0292d7e3-6419-47e4-aa3f-acdfe45f9f01-kube-api-access-cd8fx\") pod \"0292d7e3-6419-47e4-aa3f-acdfe45f9f01\" (UID: \"0292d7e3-6419-47e4-aa3f-acdfe45f9f01\") " Oct 14 13:43:23 crc kubenswrapper[4725]: I1014 13:43:23.878901 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0292d7e3-6419-47e4-aa3f-acdfe45f9f01-kube-api-access-cd8fx" (OuterVolumeSpecName: "kube-api-access-cd8fx") pod "0292d7e3-6419-47e4-aa3f-acdfe45f9f01" (UID: "0292d7e3-6419-47e4-aa3f-acdfe45f9f01"). InnerVolumeSpecName "kube-api-access-cd8fx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:43:23 crc kubenswrapper[4725]: I1014 13:43:23.905351 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0292d7e3-6419-47e4-aa3f-acdfe45f9f01-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0292d7e3-6419-47e4-aa3f-acdfe45f9f01" (UID: "0292d7e3-6419-47e4-aa3f-acdfe45f9f01"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:43:23 crc kubenswrapper[4725]: I1014 13:43:23.927229 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0292d7e3-6419-47e4-aa3f-acdfe45f9f01-inventory" (OuterVolumeSpecName: "inventory") pod "0292d7e3-6419-47e4-aa3f-acdfe45f9f01" (UID: "0292d7e3-6419-47e4-aa3f-acdfe45f9f01"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:43:23 crc kubenswrapper[4725]: I1014 13:43:23.975444 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cd8fx\" (UniqueName: \"kubernetes.io/projected/0292d7e3-6419-47e4-aa3f-acdfe45f9f01-kube-api-access-cd8fx\") on node \"crc\" DevicePath \"\"" Oct 14 13:43:23 crc kubenswrapper[4725]: I1014 13:43:23.975502 4725 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0292d7e3-6419-47e4-aa3f-acdfe45f9f01-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 13:43:23 crc kubenswrapper[4725]: I1014 13:43:23.975516 4725 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0292d7e3-6419-47e4-aa3f-acdfe45f9f01-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 13:43:24 crc kubenswrapper[4725]: I1014 13:43:24.356386 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rmqbp" event={"ID":"0292d7e3-6419-47e4-aa3f-acdfe45f9f01","Type":"ContainerDied","Data":"c5a40a93cb524d129f9f83c86df7a9be608cf3d5c8a519686a4e2bc640f4eb7d"} Oct 14 13:43:24 crc kubenswrapper[4725]: I1014 13:43:24.356693 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5a40a93cb524d129f9f83c86df7a9be608cf3d5c8a519686a4e2bc640f4eb7d" Oct 14 13:43:24 crc kubenswrapper[4725]: I1014 13:43:24.356476 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rmqbp" Oct 14 13:43:24 crc kubenswrapper[4725]: I1014 13:43:24.461957 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mlbgm"] Oct 14 13:43:24 crc kubenswrapper[4725]: E1014 13:43:24.462412 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0292d7e3-6419-47e4-aa3f-acdfe45f9f01" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Oct 14 13:43:24 crc kubenswrapper[4725]: I1014 13:43:24.462634 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="0292d7e3-6419-47e4-aa3f-acdfe45f9f01" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Oct 14 13:43:24 crc kubenswrapper[4725]: I1014 13:43:24.462871 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="0292d7e3-6419-47e4-aa3f-acdfe45f9f01" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Oct 14 13:43:24 crc kubenswrapper[4725]: I1014 13:43:24.463784 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mlbgm" Oct 14 13:43:24 crc kubenswrapper[4725]: I1014 13:43:24.475310 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mlbgm"] Oct 14 13:43:24 crc kubenswrapper[4725]: I1014 13:43:24.500109 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 13:43:24 crc kubenswrapper[4725]: I1014 13:43:24.500414 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 13:43:24 crc kubenswrapper[4725]: I1014 13:43:24.500637 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 13:43:24 crc kubenswrapper[4725]: I1014 13:43:24.500892 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qnvbv" Oct 14 13:43:24 crc kubenswrapper[4725]: I1014 13:43:24.606078 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/08321cb0-edbc-4ee5-8f5e-38084f62802a-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mlbgm\" (UID: \"08321cb0-edbc-4ee5-8f5e-38084f62802a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mlbgm" Oct 14 13:43:24 crc kubenswrapper[4725]: I1014 13:43:24.606363 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08321cb0-edbc-4ee5-8f5e-38084f62802a-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mlbgm\" (UID: \"08321cb0-edbc-4ee5-8f5e-38084f62802a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mlbgm" Oct 14 13:43:24 crc kubenswrapper[4725]: I1014 13:43:24.606418 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7cn9\" (UniqueName: \"kubernetes.io/projected/08321cb0-edbc-4ee5-8f5e-38084f62802a-kube-api-access-h7cn9\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mlbgm\" (UID: \"08321cb0-edbc-4ee5-8f5e-38084f62802a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mlbgm" Oct 14 13:43:24 crc kubenswrapper[4725]: I1014 13:43:24.708150 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/08321cb0-edbc-4ee5-8f5e-38084f62802a-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mlbgm\" (UID: \"08321cb0-edbc-4ee5-8f5e-38084f62802a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mlbgm" Oct 14 13:43:24 crc kubenswrapper[4725]: I1014 13:43:24.708244 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08321cb0-edbc-4ee5-8f5e-38084f62802a-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mlbgm\" (UID: \"08321cb0-edbc-4ee5-8f5e-38084f62802a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mlbgm" Oct 14 13:43:24 crc kubenswrapper[4725]: I1014 13:43:24.708393 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7cn9\" (UniqueName: \"kubernetes.io/projected/08321cb0-edbc-4ee5-8f5e-38084f62802a-kube-api-access-h7cn9\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-mlbgm\" (UID: \"08321cb0-edbc-4ee5-8f5e-38084f62802a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mlbgm" Oct 14 13:43:24 crc kubenswrapper[4725]: I1014 13:43:24.712876 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08321cb0-edbc-4ee5-8f5e-38084f62802a-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mlbgm\" (UID: \"08321cb0-edbc-4ee5-8f5e-38084f62802a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mlbgm" Oct 14 13:43:24 crc kubenswrapper[4725]: I1014 13:43:24.713037 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/08321cb0-edbc-4ee5-8f5e-38084f62802a-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mlbgm\" (UID: \"08321cb0-edbc-4ee5-8f5e-38084f62802a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mlbgm" Oct 14 13:43:24 crc kubenswrapper[4725]: I1014 13:43:24.739422 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7cn9\" (UniqueName: \"kubernetes.io/projected/08321cb0-edbc-4ee5-8f5e-38084f62802a-kube-api-access-h7cn9\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mlbgm\" (UID: \"08321cb0-edbc-4ee5-8f5e-38084f62802a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mlbgm" Oct 14 13:43:24 crc kubenswrapper[4725]: I1014 13:43:24.824112 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mlbgm" Oct 14 13:43:24 crc kubenswrapper[4725]: I1014 13:43:24.921663 4725 scope.go:117] "RemoveContainer" containerID="6f767896ac68228fe09840fae7ddd8f8c1d269f215009c642a1baacf3a7849d4" Oct 14 13:43:24 crc kubenswrapper[4725]: E1014 13:43:24.922242 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 13:43:25 crc kubenswrapper[4725]: I1014 13:43:25.377494 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mlbgm"] Oct 14 13:43:25 crc kubenswrapper[4725]: W1014 13:43:25.379929 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08321cb0_edbc_4ee5_8f5e_38084f62802a.slice/crio-76057b55b01eb6e3dcc43e8c62027389365912eb5ea5f430239774c0f7b67fed WatchSource:0}: Error finding container 76057b55b01eb6e3dcc43e8c62027389365912eb5ea5f430239774c0f7b67fed: Status 404 returned error can't find the container with id 76057b55b01eb6e3dcc43e8c62027389365912eb5ea5f430239774c0f7b67fed Oct 14 13:43:26 crc kubenswrapper[4725]: I1014 13:43:26.382036 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mlbgm" event={"ID":"08321cb0-edbc-4ee5-8f5e-38084f62802a","Type":"ContainerStarted","Data":"ac92d96effcbd01c28f7364a85d695b4a3511b471fdd3e2f34cc5105363522f6"} Oct 14 13:43:26 crc kubenswrapper[4725]: I1014 13:43:26.382945 4725 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mlbgm" event={"ID":"08321cb0-edbc-4ee5-8f5e-38084f62802a","Type":"ContainerStarted","Data":"76057b55b01eb6e3dcc43e8c62027389365912eb5ea5f430239774c0f7b67fed"} Oct 14 13:43:26 crc kubenswrapper[4725]: I1014 13:43:26.417342 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mlbgm" podStartSLOduration=1.878368791 podStartE2EDuration="2.417312993s" podCreationTimestamp="2025-10-14 13:43:24 +0000 UTC" firstStartedPulling="2025-10-14 13:43:25.383480577 +0000 UTC m=+1722.231915426" lastFinishedPulling="2025-10-14 13:43:25.922424809 +0000 UTC m=+1722.770859628" observedRunningTime="2025-10-14 13:43:26.401202238 +0000 UTC m=+1723.249637117" watchObservedRunningTime="2025-10-14 13:43:26.417312993 +0000 UTC m=+1723.265747842" Oct 14 13:43:31 crc kubenswrapper[4725]: I1014 13:43:31.437582 4725 generic.go:334] "Generic (PLEG): container finished" podID="08321cb0-edbc-4ee5-8f5e-38084f62802a" containerID="ac92d96effcbd01c28f7364a85d695b4a3511b471fdd3e2f34cc5105363522f6" exitCode=0 Oct 14 13:43:31 crc kubenswrapper[4725]: I1014 13:43:31.437656 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mlbgm" event={"ID":"08321cb0-edbc-4ee5-8f5e-38084f62802a","Type":"ContainerDied","Data":"ac92d96effcbd01c28f7364a85d695b4a3511b471fdd3e2f34cc5105363522f6"} Oct 14 13:43:32 crc kubenswrapper[4725]: I1014 13:43:32.827565 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mlbgm" Oct 14 13:43:32 crc kubenswrapper[4725]: I1014 13:43:32.994013 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7cn9\" (UniqueName: \"kubernetes.io/projected/08321cb0-edbc-4ee5-8f5e-38084f62802a-kube-api-access-h7cn9\") pod \"08321cb0-edbc-4ee5-8f5e-38084f62802a\" (UID: \"08321cb0-edbc-4ee5-8f5e-38084f62802a\") " Oct 14 13:43:32 crc kubenswrapper[4725]: I1014 13:43:32.994403 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/08321cb0-edbc-4ee5-8f5e-38084f62802a-ssh-key\") pod \"08321cb0-edbc-4ee5-8f5e-38084f62802a\" (UID: \"08321cb0-edbc-4ee5-8f5e-38084f62802a\") " Oct 14 13:43:32 crc kubenswrapper[4725]: I1014 13:43:32.994507 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08321cb0-edbc-4ee5-8f5e-38084f62802a-inventory\") pod \"08321cb0-edbc-4ee5-8f5e-38084f62802a\" (UID: \"08321cb0-edbc-4ee5-8f5e-38084f62802a\") " Oct 14 13:43:33 crc kubenswrapper[4725]: I1014 13:43:32.999980 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08321cb0-edbc-4ee5-8f5e-38084f62802a-kube-api-access-h7cn9" (OuterVolumeSpecName: "kube-api-access-h7cn9") pod "08321cb0-edbc-4ee5-8f5e-38084f62802a" (UID: "08321cb0-edbc-4ee5-8f5e-38084f62802a"). InnerVolumeSpecName "kube-api-access-h7cn9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:43:33 crc kubenswrapper[4725]: I1014 13:43:33.031638 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08321cb0-edbc-4ee5-8f5e-38084f62802a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "08321cb0-edbc-4ee5-8f5e-38084f62802a" (UID: "08321cb0-edbc-4ee5-8f5e-38084f62802a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:43:33 crc kubenswrapper[4725]: I1014 13:43:33.041017 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08321cb0-edbc-4ee5-8f5e-38084f62802a-inventory" (OuterVolumeSpecName: "inventory") pod "08321cb0-edbc-4ee5-8f5e-38084f62802a" (UID: "08321cb0-edbc-4ee5-8f5e-38084f62802a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:43:33 crc kubenswrapper[4725]: I1014 13:43:33.097390 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7cn9\" (UniqueName: \"kubernetes.io/projected/08321cb0-edbc-4ee5-8f5e-38084f62802a-kube-api-access-h7cn9\") on node \"crc\" DevicePath \"\"" Oct 14 13:43:33 crc kubenswrapper[4725]: I1014 13:43:33.097439 4725 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/08321cb0-edbc-4ee5-8f5e-38084f62802a-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 13:43:33 crc kubenswrapper[4725]: I1014 13:43:33.097487 4725 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/08321cb0-edbc-4ee5-8f5e-38084f62802a-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 13:43:33 crc kubenswrapper[4725]: I1014 13:43:33.493682 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mlbgm" event={"ID":"08321cb0-edbc-4ee5-8f5e-38084f62802a","Type":"ContainerDied","Data":"76057b55b01eb6e3dcc43e8c62027389365912eb5ea5f430239774c0f7b67fed"} Oct 14 13:43:33 crc kubenswrapper[4725]: I1014 13:43:33.493738 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76057b55b01eb6e3dcc43e8c62027389365912eb5ea5f430239774c0f7b67fed" Oct 14 13:43:33 crc kubenswrapper[4725]: I1014 13:43:33.493742 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mlbgm" Oct 14 13:43:33 crc kubenswrapper[4725]: I1014 13:43:33.576213 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-h7bvm"] Oct 14 13:43:33 crc kubenswrapper[4725]: E1014 13:43:33.578646 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08321cb0-edbc-4ee5-8f5e-38084f62802a" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 14 13:43:33 crc kubenswrapper[4725]: I1014 13:43:33.578686 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="08321cb0-edbc-4ee5-8f5e-38084f62802a" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 14 13:43:33 crc kubenswrapper[4725]: I1014 13:43:33.579025 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="08321cb0-edbc-4ee5-8f5e-38084f62802a" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 14 13:43:33 crc kubenswrapper[4725]: I1014 13:43:33.579725 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h7bvm" Oct 14 13:43:33 crc kubenswrapper[4725]: I1014 13:43:33.582247 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 13:43:33 crc kubenswrapper[4725]: I1014 13:43:33.582580 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 13:43:33 crc kubenswrapper[4725]: I1014 13:43:33.583038 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qnvbv" Oct 14 13:43:33 crc kubenswrapper[4725]: I1014 13:43:33.584408 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 13:43:33 crc kubenswrapper[4725]: I1014 13:43:33.597937 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-h7bvm"] Oct 14 13:43:33 crc kubenswrapper[4725]: I1014 13:43:33.712427 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8b1732ba-bcae-4c5f-96d2-230b650b8552-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-h7bvm\" (UID: \"8b1732ba-bcae-4c5f-96d2-230b650b8552\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h7bvm" Oct 14 13:43:33 crc kubenswrapper[4725]: I1014 13:43:33.712771 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9x4k\" (UniqueName: \"kubernetes.io/projected/8b1732ba-bcae-4c5f-96d2-230b650b8552-kube-api-access-x9x4k\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-h7bvm\" (UID: \"8b1732ba-bcae-4c5f-96d2-230b650b8552\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h7bvm" Oct 14 13:43:33 crc kubenswrapper[4725]: I1014 13:43:33.712897 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b1732ba-bcae-4c5f-96d2-230b650b8552-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-h7bvm\" (UID: \"8b1732ba-bcae-4c5f-96d2-230b650b8552\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h7bvm" Oct 14 13:43:33 crc kubenswrapper[4725]: I1014 13:43:33.815087 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8b1732ba-bcae-4c5f-96d2-230b650b8552-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-h7bvm\" (UID: \"8b1732ba-bcae-4c5f-96d2-230b650b8552\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h7bvm" Oct 14 13:43:33 crc kubenswrapper[4725]: I1014 13:43:33.815298 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9x4k\" (UniqueName: \"kubernetes.io/projected/8b1732ba-bcae-4c5f-96d2-230b650b8552-kube-api-access-x9x4k\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-h7bvm\" (UID: \"8b1732ba-bcae-4c5f-96d2-230b650b8552\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h7bvm" Oct 14 13:43:33 crc kubenswrapper[4725]: I1014 13:43:33.815367 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b1732ba-bcae-4c5f-96d2-230b650b8552-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-h7bvm\" (UID: 
\"8b1732ba-bcae-4c5f-96d2-230b650b8552\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h7bvm" Oct 14 13:43:33 crc kubenswrapper[4725]: I1014 13:43:33.820570 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8b1732ba-bcae-4c5f-96d2-230b650b8552-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-h7bvm\" (UID: \"8b1732ba-bcae-4c5f-96d2-230b650b8552\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h7bvm" Oct 14 13:43:33 crc kubenswrapper[4725]: I1014 13:43:33.821011 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b1732ba-bcae-4c5f-96d2-230b650b8552-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-h7bvm\" (UID: \"8b1732ba-bcae-4c5f-96d2-230b650b8552\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h7bvm" Oct 14 13:43:33 crc kubenswrapper[4725]: I1014 13:43:33.834015 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9x4k\" (UniqueName: \"kubernetes.io/projected/8b1732ba-bcae-4c5f-96d2-230b650b8552-kube-api-access-x9x4k\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-h7bvm\" (UID: \"8b1732ba-bcae-4c5f-96d2-230b650b8552\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h7bvm" Oct 14 13:43:33 crc kubenswrapper[4725]: I1014 13:43:33.913111 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h7bvm" Oct 14 13:43:34 crc kubenswrapper[4725]: I1014 13:43:34.569409 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-h7bvm"] Oct 14 13:43:35 crc kubenswrapper[4725]: I1014 13:43:35.518061 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h7bvm" event={"ID":"8b1732ba-bcae-4c5f-96d2-230b650b8552","Type":"ContainerStarted","Data":"2b72a4407ef7c62849df0305f30dca823b9ba35c7d5d059f9b28e7fda85296b0"} Oct 14 13:43:35 crc kubenswrapper[4725]: I1014 13:43:35.518409 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h7bvm" event={"ID":"8b1732ba-bcae-4c5f-96d2-230b650b8552","Type":"ContainerStarted","Data":"e794291ce325e196de71e697cc295931b6548a1a4c304c0c64e5876e49195fe6"} Oct 14 13:43:35 crc kubenswrapper[4725]: I1014 13:43:35.541192 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h7bvm" podStartSLOduration=2.120257954 podStartE2EDuration="2.541122076s" podCreationTimestamp="2025-10-14 13:43:33 +0000 UTC" firstStartedPulling="2025-10-14 13:43:34.58046539 +0000 UTC m=+1731.428900199" lastFinishedPulling="2025-10-14 13:43:35.001329512 +0000 UTC m=+1731.849764321" observedRunningTime="2025-10-14 13:43:35.538964027 +0000 UTC m=+1732.387398836" watchObservedRunningTime="2025-10-14 13:43:35.541122076 +0000 UTC m=+1732.389556925" Oct 14 13:43:35 crc kubenswrapper[4725]: I1014 13:43:35.922716 4725 scope.go:117] "RemoveContainer" containerID="6f767896ac68228fe09840fae7ddd8f8c1d269f215009c642a1baacf3a7849d4" Oct 14 13:43:35 crc kubenswrapper[4725]: E1014 13:43:35.923422 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 13:43:40 crc kubenswrapper[4725]: I1014 13:43:40.055274 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-lp5lp"] Oct 14 13:43:40 crc kubenswrapper[4725]: I1014 13:43:40.067074 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-42h4h"] Oct 14 13:43:40 crc kubenswrapper[4725]: I1014 13:43:40.079324 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-4vdpc"] Oct 14 13:43:40 crc kubenswrapper[4725]: I1014 13:43:40.087692 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-42h4h"] Oct 14 13:43:40 crc kubenswrapper[4725]: I1014 13:43:40.094593 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-lp5lp"] Oct 14 13:43:40 crc kubenswrapper[4725]: I1014 13:43:40.101434 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-4vdpc"] Oct 14 13:43:41 crc kubenswrapper[4725]: I1014 13:43:41.934338 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bbd0716-da3f-4287-9ede-533325e1b42e" path="/var/lib/kubelet/pods/6bbd0716-da3f-4287-9ede-533325e1b42e/volumes" Oct 14 13:43:41 crc kubenswrapper[4725]: I1014 13:43:41.935903 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97761d43-e004-4dd0-9648-9ef68fbf7d18" path="/var/lib/kubelet/pods/97761d43-e004-4dd0-9648-9ef68fbf7d18/volumes" Oct 14 13:43:41 crc kubenswrapper[4725]: I1014 13:43:41.936727 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bacf7fe9-2e8f-4909-9761-e73e78ec0008" path="/var/lib/kubelet/pods/bacf7fe9-2e8f-4909-9761-e73e78ec0008/volumes" Oct 14 13:43:46 crc kubenswrapper[4725]: I1014 13:43:46.921127 4725 scope.go:117] "RemoveContainer" containerID="6f767896ac68228fe09840fae7ddd8f8c1d269f215009c642a1baacf3a7849d4" Oct 14 13:43:46 crc kubenswrapper[4725]: E1014 13:43:46.922080 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 13:43:47 crc kubenswrapper[4725]: I1014 13:43:47.140567 4725 scope.go:117] "RemoveContainer" containerID="f3ae748cd67304102fa70e79a35a19fab53a8e568b01e0cba3808f45c1d7d2d7" Oct 14 13:43:47 crc kubenswrapper[4725]: I1014 13:43:47.203436 4725 scope.go:117] "RemoveContainer" containerID="30e6b0ad0cebba6a776f105c671a35f291f4ee3ccf91698f91eb0e58bac12c3c" Oct 14 13:43:47 crc kubenswrapper[4725]: I1014 13:43:47.260225 4725 scope.go:117] "RemoveContainer" containerID="fd312f0d0c681b4221491063340d0d4031a50c89bad324a7bd357344f15051b3" Oct 14 13:43:47 crc kubenswrapper[4725]: I1014 13:43:47.324139 4725 scope.go:117] "RemoveContainer" containerID="ba31362afcca98cb2af19bc9c3b44949a8b5c6a7647abc6180dd74e41a0046ef" Oct 14 13:43:47 crc kubenswrapper[4725]: I1014 13:43:47.364351 4725 scope.go:117] "RemoveContainer" containerID="e0dcf1caeb71663ee972c6f823679124c497c498e4d1b99091b1fb84b43bfee0" Oct 14 13:43:47 
crc kubenswrapper[4725]: I1014 13:43:47.423043 4725 scope.go:117] "RemoveContainer" containerID="9de7d1d9280f5273ba36dc02ca1461284edcba1bfbe3ccada9299d9b936c41d4" Oct 14 13:43:48 crc kubenswrapper[4725]: I1014 13:43:48.027369 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-7264-account-create-4skdt"] Oct 14 13:43:48 crc kubenswrapper[4725]: I1014 13:43:48.037377 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-7264-account-create-4skdt"] Oct 14 13:43:49 crc kubenswrapper[4725]: I1014 13:43:49.939388 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c596e70-a030-4573-9cbe-3782c7c02fea" path="/var/lib/kubelet/pods/8c596e70-a030-4573-9cbe-3782c7c02fea/volumes" Oct 14 13:43:58 crc kubenswrapper[4725]: I1014 13:43:58.036822 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-7fb6-account-create-xdc48"] Oct 14 13:43:58 crc kubenswrapper[4725]: I1014 13:43:58.049346 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-6745-account-create-kchwn"] Oct 14 13:43:58 crc kubenswrapper[4725]: I1014 13:43:58.059404 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-7fb6-account-create-xdc48"] Oct 14 13:43:58 crc kubenswrapper[4725]: I1014 13:43:58.069383 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-6745-account-create-kchwn"] Oct 14 13:43:59 crc kubenswrapper[4725]: I1014 13:43:59.937427 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e65b9ca-75fc-422b-bd39-55be57dd727f" path="/var/lib/kubelet/pods/9e65b9ca-75fc-422b-bd39-55be57dd727f/volumes" Oct 14 13:43:59 crc kubenswrapper[4725]: I1014 13:43:59.938470 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1eaa22a-4372-40ab-be9f-6e1c845407dc" path="/var/lib/kubelet/pods/f1eaa22a-4372-40ab-be9f-6e1c845407dc/volumes" Oct 14 13:44:00 crc kubenswrapper[4725]: I1014 13:44:00.921775 4725 scope.go:117] "RemoveContainer" containerID="6f767896ac68228fe09840fae7ddd8f8c1d269f215009c642a1baacf3a7849d4" Oct 14 13:44:00 crc kubenswrapper[4725]: E1014 13:44:00.922657 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 13:44:11 crc kubenswrapper[4725]: I1014 13:44:11.921924 4725 scope.go:117] "RemoveContainer" containerID="6f767896ac68228fe09840fae7ddd8f8c1d269f215009c642a1baacf3a7849d4" Oct 14 13:44:11 crc kubenswrapper[4725]: E1014 13:44:11.922894 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 13:44:15 crc kubenswrapper[4725]: I1014 13:44:15.963905 4725 generic.go:334] "Generic (PLEG): container finished" podID="8b1732ba-bcae-4c5f-96d2-230b650b8552" containerID="2b72a4407ef7c62849df0305f30dca823b9ba35c7d5d059f9b28e7fda85296b0" exitCode=0 Oct 14 
13:44:15 crc kubenswrapper[4725]: I1014 13:44:15.963992 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h7bvm" event={"ID":"8b1732ba-bcae-4c5f-96d2-230b650b8552","Type":"ContainerDied","Data":"2b72a4407ef7c62849df0305f30dca823b9ba35c7d5d059f9b28e7fda85296b0"} Oct 14 13:44:16 crc kubenswrapper[4725]: I1014 13:44:16.394206 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t9gr7"] Oct 14 13:44:16 crc kubenswrapper[4725]: I1014 13:44:16.396072 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t9gr7" Oct 14 13:44:16 crc kubenswrapper[4725]: I1014 13:44:16.408641 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t9gr7"] Oct 14 13:44:16 crc kubenswrapper[4725]: I1014 13:44:16.533694 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggxtj\" (UniqueName: \"kubernetes.io/projected/977de268-c914-4408-9db7-ed16c1bb05b2-kube-api-access-ggxtj\") pod \"certified-operators-t9gr7\" (UID: \"977de268-c914-4408-9db7-ed16c1bb05b2\") " pod="openshift-marketplace/certified-operators-t9gr7" Oct 14 13:44:16 crc kubenswrapper[4725]: I1014 13:44:16.533769 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/977de268-c914-4408-9db7-ed16c1bb05b2-catalog-content\") pod \"certified-operators-t9gr7\" (UID: \"977de268-c914-4408-9db7-ed16c1bb05b2\") " pod="openshift-marketplace/certified-operators-t9gr7" Oct 14 13:44:16 crc kubenswrapper[4725]: I1014 13:44:16.533819 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/977de268-c914-4408-9db7-ed16c1bb05b2-utilities\") pod \"certified-operators-t9gr7\" (UID: \"977de268-c914-4408-9db7-ed16c1bb05b2\") " pod="openshift-marketplace/certified-operators-t9gr7" Oct 14 13:44:16 crc kubenswrapper[4725]: I1014 13:44:16.635664 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggxtj\" (UniqueName: \"kubernetes.io/projected/977de268-c914-4408-9db7-ed16c1bb05b2-kube-api-access-ggxtj\") pod \"certified-operators-t9gr7\" (UID: \"977de268-c914-4408-9db7-ed16c1bb05b2\") " pod="openshift-marketplace/certified-operators-t9gr7" Oct 14 13:44:16 crc kubenswrapper[4725]: I1014 13:44:16.635742 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/977de268-c914-4408-9db7-ed16c1bb05b2-catalog-content\") pod \"certified-operators-t9gr7\" (UID: \"977de268-c914-4408-9db7-ed16c1bb05b2\") " pod="openshift-marketplace/certified-operators-t9gr7" Oct 14 13:44:16 crc kubenswrapper[4725]: I1014 13:44:16.635789 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/977de268-c914-4408-9db7-ed16c1bb05b2-utilities\") pod \"certified-operators-t9gr7\" (UID: \"977de268-c914-4408-9db7-ed16c1bb05b2\") " pod="openshift-marketplace/certified-operators-t9gr7" Oct 14 13:44:16 crc kubenswrapper[4725]: I1014 13:44:16.636288 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/977de268-c914-4408-9db7-ed16c1bb05b2-catalog-content\") 
pod \"certified-operators-t9gr7\" (UID: \"977de268-c914-4408-9db7-ed16c1bb05b2\") " pod="openshift-marketplace/certified-operators-t9gr7" Oct 14 13:44:16 crc kubenswrapper[4725]: I1014 13:44:16.636391 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/977de268-c914-4408-9db7-ed16c1bb05b2-utilities\") pod \"certified-operators-t9gr7\" (UID: \"977de268-c914-4408-9db7-ed16c1bb05b2\") " pod="openshift-marketplace/certified-operators-t9gr7" Oct 14 13:44:16 crc kubenswrapper[4725]: I1014 13:44:16.665589 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggxtj\" (UniqueName: \"kubernetes.io/projected/977de268-c914-4408-9db7-ed16c1bb05b2-kube-api-access-ggxtj\") pod \"certified-operators-t9gr7\" (UID: \"977de268-c914-4408-9db7-ed16c1bb05b2\") " pod="openshift-marketplace/certified-operators-t9gr7" Oct 14 13:44:16 crc kubenswrapper[4725]: I1014 13:44:16.729517 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t9gr7" Oct 14 13:44:17 crc kubenswrapper[4725]: I1014 13:44:17.324373 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t9gr7"] Oct 14 13:44:17 crc kubenswrapper[4725]: I1014 13:44:17.533419 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h7bvm" Oct 14 13:44:17 crc kubenswrapper[4725]: I1014 13:44:17.663119 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8b1732ba-bcae-4c5f-96d2-230b650b8552-ssh-key\") pod \"8b1732ba-bcae-4c5f-96d2-230b650b8552\" (UID: \"8b1732ba-bcae-4c5f-96d2-230b650b8552\") " Oct 14 13:44:17 crc kubenswrapper[4725]: I1014 13:44:17.663235 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9x4k\" (UniqueName: \"kubernetes.io/projected/8b1732ba-bcae-4c5f-96d2-230b650b8552-kube-api-access-x9x4k\") pod \"8b1732ba-bcae-4c5f-96d2-230b650b8552\" (UID: \"8b1732ba-bcae-4c5f-96d2-230b650b8552\") " Oct 14 13:44:17 crc kubenswrapper[4725]: I1014 13:44:17.663263 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b1732ba-bcae-4c5f-96d2-230b650b8552-inventory\") pod \"8b1732ba-bcae-4c5f-96d2-230b650b8552\" (UID: \"8b1732ba-bcae-4c5f-96d2-230b650b8552\") " Oct 14 13:44:17 crc kubenswrapper[4725]: I1014 13:44:17.670050 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b1732ba-bcae-4c5f-96d2-230b650b8552-kube-api-access-x9x4k" (OuterVolumeSpecName: "kube-api-access-x9x4k") pod "8b1732ba-bcae-4c5f-96d2-230b650b8552" (UID: "8b1732ba-bcae-4c5f-96d2-230b650b8552"). InnerVolumeSpecName "kube-api-access-x9x4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:44:17 crc kubenswrapper[4725]: I1014 13:44:17.697012 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b1732ba-bcae-4c5f-96d2-230b650b8552-inventory" (OuterVolumeSpecName: "inventory") pod "8b1732ba-bcae-4c5f-96d2-230b650b8552" (UID: "8b1732ba-bcae-4c5f-96d2-230b650b8552"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:44:17 crc kubenswrapper[4725]: I1014 13:44:17.715298 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b1732ba-bcae-4c5f-96d2-230b650b8552-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8b1732ba-bcae-4c5f-96d2-230b650b8552" (UID: "8b1732ba-bcae-4c5f-96d2-230b650b8552"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:44:17 crc kubenswrapper[4725]: I1014 13:44:17.766586 4725 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8b1732ba-bcae-4c5f-96d2-230b650b8552-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 13:44:17 crc kubenswrapper[4725]: I1014 13:44:17.766616 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9x4k\" (UniqueName: \"kubernetes.io/projected/8b1732ba-bcae-4c5f-96d2-230b650b8552-kube-api-access-x9x4k\") on node \"crc\" DevicePath \"\"" Oct 14 13:44:17 crc kubenswrapper[4725]: I1014 13:44:17.766625 4725 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b1732ba-bcae-4c5f-96d2-230b650b8552-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 13:44:17 crc kubenswrapper[4725]: I1014 13:44:17.998702 4725 generic.go:334] "Generic (PLEG): container finished" podID="977de268-c914-4408-9db7-ed16c1bb05b2" containerID="eb6e695de3a0bad5d221c046062bcaaf98e8cda0b7ba3c9b016ebfdd4b3f1aed" exitCode=0 Oct 14 13:44:17 crc kubenswrapper[4725]: I1014 13:44:17.998824 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t9gr7" event={"ID":"977de268-c914-4408-9db7-ed16c1bb05b2","Type":"ContainerDied","Data":"eb6e695de3a0bad5d221c046062bcaaf98e8cda0b7ba3c9b016ebfdd4b3f1aed"} Oct 14 13:44:17 crc kubenswrapper[4725]: I1014 13:44:17.998868 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t9gr7" event={"ID":"977de268-c914-4408-9db7-ed16c1bb05b2","Type":"ContainerStarted","Data":"1d101e622ec86786d09d464e79b20027a02efee8f235d1ca747774e406b4ddd9"} Oct 14 13:44:18 crc kubenswrapper[4725]: I1014 13:44:18.001025 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h7bvm" event={"ID":"8b1732ba-bcae-4c5f-96d2-230b650b8552","Type":"ContainerDied","Data":"e794291ce325e196de71e697cc295931b6548a1a4c304c0c64e5876e49195fe6"} Oct 14 13:44:18 crc kubenswrapper[4725]: I1014 13:44:18.001351 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e794291ce325e196de71e697cc295931b6548a1a4c304c0c64e5876e49195fe6" Oct 14 13:44:18 crc kubenswrapper[4725]: I1014 13:44:18.001072 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-h7bvm" Oct 14 13:44:18 crc kubenswrapper[4725]: I1014 13:44:18.140171 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mnw6x"] Oct 14 13:44:18 crc kubenswrapper[4725]: E1014 13:44:18.140733 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b1732ba-bcae-4c5f-96d2-230b650b8552" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 14 13:44:18 crc kubenswrapper[4725]: I1014 13:44:18.140799 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b1732ba-bcae-4c5f-96d2-230b650b8552" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 14 13:44:18 crc kubenswrapper[4725]: I1014 13:44:18.141018 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b1732ba-bcae-4c5f-96d2-230b650b8552" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 14 13:44:18 crc kubenswrapper[4725]: I1014 13:44:18.141870 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mnw6x" Oct 14 13:44:18 crc kubenswrapper[4725]: I1014 13:44:18.144566 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qnvbv" Oct 14 13:44:18 crc kubenswrapper[4725]: I1014 13:44:18.145582 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 13:44:18 crc kubenswrapper[4725]: I1014 13:44:18.145854 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 13:44:18 crc kubenswrapper[4725]: I1014 13:44:18.145929 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 13:44:18 crc kubenswrapper[4725]: I1014 13:44:18.161864 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mnw6x"] Oct 14 13:44:18 crc kubenswrapper[4725]: I1014 13:44:18.278153 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d261a991-7df5-4a6b-981d-b191a3d4702b-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mnw6x\" (UID: \"d261a991-7df5-4a6b-981d-b191a3d4702b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mnw6x" Oct 14 13:44:18 crc kubenswrapper[4725]: I1014 13:44:18.278312 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkhdt\" (UniqueName: \"kubernetes.io/projected/d261a991-7df5-4a6b-981d-b191a3d4702b-kube-api-access-qkhdt\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mnw6x\" (UID: \"d261a991-7df5-4a6b-981d-b191a3d4702b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mnw6x" Oct 14 13:44:18 crc kubenswrapper[4725]: I1014 13:44:18.278350 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d261a991-7df5-4a6b-981d-b191a3d4702b-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mnw6x\" (UID: \"d261a991-7df5-4a6b-981d-b191a3d4702b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mnw6x" Oct 14 13:44:18 crc kubenswrapper[4725]: I1014 13:44:18.379946 4725 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qkhdt\" (UniqueName: \"kubernetes.io/projected/d261a991-7df5-4a6b-981d-b191a3d4702b-kube-api-access-qkhdt\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mnw6x\" (UID: \"d261a991-7df5-4a6b-981d-b191a3d4702b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mnw6x" Oct 14 13:44:18 crc kubenswrapper[4725]: I1014 13:44:18.380009 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d261a991-7df5-4a6b-981d-b191a3d4702b-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mnw6x\" (UID: \"d261a991-7df5-4a6b-981d-b191a3d4702b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mnw6x" Oct 14 13:44:18 crc kubenswrapper[4725]: I1014 13:44:18.380094 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d261a991-7df5-4a6b-981d-b191a3d4702b-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mnw6x\" (UID: \"d261a991-7df5-4a6b-981d-b191a3d4702b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mnw6x" Oct 14 13:44:18 crc kubenswrapper[4725]: I1014 13:44:18.384519 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d261a991-7df5-4a6b-981d-b191a3d4702b-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mnw6x\" (UID: \"d261a991-7df5-4a6b-981d-b191a3d4702b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mnw6x" Oct 14 13:44:18 crc kubenswrapper[4725]: I1014 13:44:18.384615 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d261a991-7df5-4a6b-981d-b191a3d4702b-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mnw6x\" (UID: \"d261a991-7df5-4a6b-981d-b191a3d4702b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mnw6x" Oct 14 13:44:18 crc kubenswrapper[4725]: I1014 13:44:18.408365 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkhdt\" (UniqueName: \"kubernetes.io/projected/d261a991-7df5-4a6b-981d-b191a3d4702b-kube-api-access-qkhdt\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mnw6x\" (UID: \"d261a991-7df5-4a6b-981d-b191a3d4702b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mnw6x" Oct 14 13:44:18 crc kubenswrapper[4725]: I1014 13:44:18.477534 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mnw6x" Oct 14 13:44:19 crc kubenswrapper[4725]: I1014 13:44:19.001367 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mnw6x"] Oct 14 13:44:19 crc kubenswrapper[4725]: I1014 13:44:19.019088 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t9gr7" event={"ID":"977de268-c914-4408-9db7-ed16c1bb05b2","Type":"ContainerStarted","Data":"ec5a8f9e2ceabcbdf9eab0a77907ff3de31a0e71fa2f7651397f744f8fba254c"} Oct 14 13:44:19 crc kubenswrapper[4725]: I1014 13:44:19.066761 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-mdw9d"] Oct 14 13:44:19 crc kubenswrapper[4725]: I1014 13:44:19.075015 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-mdw9d"] Oct 14 13:44:19 crc kubenswrapper[4725]: I1014 13:44:19.933501 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71b4602e-fc28-44d1-986e-dad1133397c1" path="/var/lib/kubelet/pods/71b4602e-fc28-44d1-986e-dad1133397c1/volumes" Oct 14 13:44:20 crc kubenswrapper[4725]: I1014 13:44:20.033950 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mnw6x" event={"ID":"d261a991-7df5-4a6b-981d-b191a3d4702b","Type":"ContainerStarted","Data":"186fe5a9bafa6bb9f88035a73bbfeab80c893f3b94903f079c06ba5e46969ff6"} Oct 14 13:44:20 crc kubenswrapper[4725]: I1014 13:44:20.034002 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mnw6x" event={"ID":"d261a991-7df5-4a6b-981d-b191a3d4702b","Type":"ContainerStarted","Data":"db89c6b32f768667076492e5a99e01cdeb2d01abb3050f2e9c468152a024d63e"} Oct 14 13:44:20 crc kubenswrapper[4725]: I1014 13:44:20.045738 4725 generic.go:334] "Generic (PLEG): container finished" podID="977de268-c914-4408-9db7-ed16c1bb05b2" containerID="ec5a8f9e2ceabcbdf9eab0a77907ff3de31a0e71fa2f7651397f744f8fba254c" exitCode=0 Oct 14 13:44:20 crc kubenswrapper[4725]: I1014 13:44:20.045800 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t9gr7" event={"ID":"977de268-c914-4408-9db7-ed16c1bb05b2","Type":"ContainerDied","Data":"ec5a8f9e2ceabcbdf9eab0a77907ff3de31a0e71fa2f7651397f744f8fba254c"} Oct 14 13:44:20 crc kubenswrapper[4725]: I1014 13:44:20.064338 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mnw6x" podStartSLOduration=1.663972622 podStartE2EDuration="2.064313626s" podCreationTimestamp="2025-10-14 13:44:18 +0000 UTC" firstStartedPulling="2025-10-14 13:44:19.007041599 +0000 UTC m=+1775.855476408" lastFinishedPulling="2025-10-14 13:44:19.407382593 +0000 UTC m=+1776.255817412" observedRunningTime="2025-10-14 13:44:20.052398078 +0000 UTC m=+1776.900832887" watchObservedRunningTime="2025-10-14 13:44:20.064313626 +0000 UTC m=+1776.912748445" Oct 14 13:44:21 crc kubenswrapper[4725]: I1014 13:44:21.057247 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t9gr7" event={"ID":"977de268-c914-4408-9db7-ed16c1bb05b2","Type":"ContainerStarted","Data":"e42786e748f399f5d75c099772d75b7fc1c8b427f4d1f82725f1115aaa3bffb4"} Oct 14 13:44:21 crc kubenswrapper[4725]: I1014 13:44:21.094380 4725 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-marketplace/certified-operators-t9gr7" podStartSLOduration=2.486577924 podStartE2EDuration="5.094354393s" podCreationTimestamp="2025-10-14 13:44:16 +0000 UTC" firstStartedPulling="2025-10-14 13:44:18.002974224 +0000 UTC m=+1774.851409043" lastFinishedPulling="2025-10-14 13:44:20.610750693 +0000 UTC m=+1777.459185512" observedRunningTime="2025-10-14 13:44:21.090682763 +0000 UTC m=+1777.939117592" watchObservedRunningTime="2025-10-14 13:44:21.094354393 +0000 UTC m=+1777.942789212" Oct 14 13:44:25 crc kubenswrapper[4725]: I1014 13:44:25.921711 4725 scope.go:117] "RemoveContainer" containerID="6f767896ac68228fe09840fae7ddd8f8c1d269f215009c642a1baacf3a7849d4" Oct 14 13:44:25 crc kubenswrapper[4725]: E1014 13:44:25.922610 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 13:44:26 crc kubenswrapper[4725]: I1014 13:44:26.729668 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t9gr7" Oct 14 13:44:26 crc kubenswrapper[4725]: I1014 13:44:26.729732 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t9gr7" Oct 14 13:44:26 crc kubenswrapper[4725]: I1014 13:44:26.794168 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t9gr7" Oct 14 13:44:27 crc kubenswrapper[4725]: I1014 13:44:27.187925 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t9gr7" Oct 14 13:44:27 crc kubenswrapper[4725]: I1014 13:44:27.238546 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t9gr7"] Oct 14 13:44:29 crc kubenswrapper[4725]: I1014 13:44:29.127848 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t9gr7" podUID="977de268-c914-4408-9db7-ed16c1bb05b2" containerName="registry-server" containerID="cri-o://e42786e748f399f5d75c099772d75b7fc1c8b427f4d1f82725f1115aaa3bffb4" gracePeriod=2 Oct 14 13:44:29 crc kubenswrapper[4725]: E1014 13:44:29.249479 4725 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod977de268_c914_4408_9db7_ed16c1bb05b2.slice/crio-e42786e748f399f5d75c099772d75b7fc1c8b427f4d1f82725f1115aaa3bffb4.scope\": RecentStats: unable to find data in memory cache]" Oct 14 13:44:29 crc kubenswrapper[4725]: I1014 13:44:29.612154 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t9gr7" Oct 14 13:44:29 crc kubenswrapper[4725]: I1014 13:44:29.707643 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggxtj\" (UniqueName: \"kubernetes.io/projected/977de268-c914-4408-9db7-ed16c1bb05b2-kube-api-access-ggxtj\") pod \"977de268-c914-4408-9db7-ed16c1bb05b2\" (UID: \"977de268-c914-4408-9db7-ed16c1bb05b2\") " Oct 14 13:44:29 crc kubenswrapper[4725]: I1014 13:44:29.708340 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/977de268-c914-4408-9db7-ed16c1bb05b2-catalog-content\") pod \"977de268-c914-4408-9db7-ed16c1bb05b2\" (UID: \"977de268-c914-4408-9db7-ed16c1bb05b2\") " Oct 14 13:44:29 crc kubenswrapper[4725]: I1014 13:44:29.708422 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/977de268-c914-4408-9db7-ed16c1bb05b2-utilities\") pod \"977de268-c914-4408-9db7-ed16c1bb05b2\" (UID: \"977de268-c914-4408-9db7-ed16c1bb05b2\") " Oct 14 13:44:29 crc kubenswrapper[4725]: I1014 13:44:29.709153 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/977de268-c914-4408-9db7-ed16c1bb05b2-utilities" (OuterVolumeSpecName: "utilities") pod "977de268-c914-4408-9db7-ed16c1bb05b2" (UID: "977de268-c914-4408-9db7-ed16c1bb05b2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:44:29 crc kubenswrapper[4725]: I1014 13:44:29.724377 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/977de268-c914-4408-9db7-ed16c1bb05b2-kube-api-access-ggxtj" (OuterVolumeSpecName: "kube-api-access-ggxtj") pod "977de268-c914-4408-9db7-ed16c1bb05b2" (UID: "977de268-c914-4408-9db7-ed16c1bb05b2"). InnerVolumeSpecName "kube-api-access-ggxtj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:44:29 crc kubenswrapper[4725]: I1014 13:44:29.766565 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/977de268-c914-4408-9db7-ed16c1bb05b2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "977de268-c914-4408-9db7-ed16c1bb05b2" (UID: "977de268-c914-4408-9db7-ed16c1bb05b2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:44:29 crc kubenswrapper[4725]: I1014 13:44:29.810913 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/977de268-c914-4408-9db7-ed16c1bb05b2-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 13:44:29 crc kubenswrapper[4725]: I1014 13:44:29.810941 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggxtj\" (UniqueName: \"kubernetes.io/projected/977de268-c914-4408-9db7-ed16c1bb05b2-kube-api-access-ggxtj\") on node \"crc\" DevicePath \"\"" Oct 14 13:44:29 crc kubenswrapper[4725]: I1014 13:44:29.810952 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/977de268-c914-4408-9db7-ed16c1bb05b2-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 13:44:30 crc kubenswrapper[4725]: I1014 13:44:30.138405 4725 generic.go:334] "Generic (PLEG): container finished" podID="977de268-c914-4408-9db7-ed16c1bb05b2" containerID="e42786e748f399f5d75c099772d75b7fc1c8b427f4d1f82725f1115aaa3bffb4" exitCode=0 Oct 14 13:44:30 crc kubenswrapper[4725]: I1014 13:44:30.138459 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t9gr7" event={"ID":"977de268-c914-4408-9db7-ed16c1bb05b2","Type":"ContainerDied","Data":"e42786e748f399f5d75c099772d75b7fc1c8b427f4d1f82725f1115aaa3bffb4"} Oct 14 13:44:30 crc kubenswrapper[4725]: I1014 13:44:30.138494 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t9gr7" event={"ID":"977de268-c914-4408-9db7-ed16c1bb05b2","Type":"ContainerDied","Data":"1d101e622ec86786d09d464e79b20027a02efee8f235d1ca747774e406b4ddd9"} Oct 14 13:44:30 crc kubenswrapper[4725]: I1014 13:44:30.138510 4725 scope.go:117] "RemoveContainer" containerID="e42786e748f399f5d75c099772d75b7fc1c8b427f4d1f82725f1115aaa3bffb4" Oct 14 13:44:30 crc kubenswrapper[4725]: I1014 13:44:30.138510 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t9gr7" Oct 14 13:44:30 crc kubenswrapper[4725]: I1014 13:44:30.163356 4725 scope.go:117] "RemoveContainer" containerID="ec5a8f9e2ceabcbdf9eab0a77907ff3de31a0e71fa2f7651397f744f8fba254c" Oct 14 13:44:30 crc kubenswrapper[4725]: I1014 13:44:30.167775 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t9gr7"] Oct 14 13:44:30 crc kubenswrapper[4725]: I1014 13:44:30.176661 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t9gr7"] Oct 14 13:44:30 crc kubenswrapper[4725]: I1014 13:44:30.202023 4725 scope.go:117] "RemoveContainer" containerID="eb6e695de3a0bad5d221c046062bcaaf98e8cda0b7ba3c9b016ebfdd4b3f1aed" Oct 14 13:44:30 crc kubenswrapper[4725]: I1014 13:44:30.241843 4725 scope.go:117] "RemoveContainer" containerID="e42786e748f399f5d75c099772d75b7fc1c8b427f4d1f82725f1115aaa3bffb4" Oct 14 13:44:30 crc kubenswrapper[4725]: E1014 13:44:30.242332 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e42786e748f399f5d75c099772d75b7fc1c8b427f4d1f82725f1115aaa3bffb4\": container with ID starting with e42786e748f399f5d75c099772d75b7fc1c8b427f4d1f82725f1115aaa3bffb4 not found: ID does not exist" containerID="e42786e748f399f5d75c099772d75b7fc1c8b427f4d1f82725f1115aaa3bffb4" Oct 14 13:44:30 crc kubenswrapper[4725]: I1014 13:44:30.242389 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e42786e748f399f5d75c099772d75b7fc1c8b427f4d1f82725f1115aaa3bffb4"} err="failed to get container status \"e42786e748f399f5d75c099772d75b7fc1c8b427f4d1f82725f1115aaa3bffb4\": rpc error: code = NotFound desc = could not find container \"e42786e748f399f5d75c099772d75b7fc1c8b427f4d1f82725f1115aaa3bffb4\": container with ID starting with e42786e748f399f5d75c099772d75b7fc1c8b427f4d1f82725f1115aaa3bffb4 not found: ID does not exist" Oct 14 13:44:30 crc kubenswrapper[4725]: I1014 13:44:30.242418 4725 scope.go:117] "RemoveContainer" containerID="ec5a8f9e2ceabcbdf9eab0a77907ff3de31a0e71fa2f7651397f744f8fba254c" Oct 14 13:44:30 crc kubenswrapper[4725]: E1014 13:44:30.242790 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec5a8f9e2ceabcbdf9eab0a77907ff3de31a0e71fa2f7651397f744f8fba254c\": container with ID starting with ec5a8f9e2ceabcbdf9eab0a77907ff3de31a0e71fa2f7651397f744f8fba254c not found: ID does not exist" containerID="ec5a8f9e2ceabcbdf9eab0a77907ff3de31a0e71fa2f7651397f744f8fba254c" Oct 14 13:44:30 crc kubenswrapper[4725]: I1014 13:44:30.242819 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec5a8f9e2ceabcbdf9eab0a77907ff3de31a0e71fa2f7651397f744f8fba254c"} err="failed to get container status \"ec5a8f9e2ceabcbdf9eab0a77907ff3de31a0e71fa2f7651397f744f8fba254c\": rpc error: code = NotFound desc = could not find container \"ec5a8f9e2ceabcbdf9eab0a77907ff3de31a0e71fa2f7651397f744f8fba254c\": container with ID starting with ec5a8f9e2ceabcbdf9eab0a77907ff3de31a0e71fa2f7651397f744f8fba254c not found: ID does not exist" Oct 14 13:44:30 crc kubenswrapper[4725]: I1014 13:44:30.242835 4725 scope.go:117] "RemoveContainer" containerID="eb6e695de3a0bad5d221c046062bcaaf98e8cda0b7ba3c9b016ebfdd4b3f1aed" Oct 14 13:44:30 crc kubenswrapper[4725]: E1014 13:44:30.243284 4725 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"eb6e695de3a0bad5d221c046062bcaaf98e8cda0b7ba3c9b016ebfdd4b3f1aed\": container with ID starting with eb6e695de3a0bad5d221c046062bcaaf98e8cda0b7ba3c9b016ebfdd4b3f1aed not found: ID does not exist" containerID="eb6e695de3a0bad5d221c046062bcaaf98e8cda0b7ba3c9b016ebfdd4b3f1aed" Oct 14 13:44:30 crc kubenswrapper[4725]: I1014 13:44:30.243343 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb6e695de3a0bad5d221c046062bcaaf98e8cda0b7ba3c9b016ebfdd4b3f1aed"} err="failed to get container status \"eb6e695de3a0bad5d221c046062bcaaf98e8cda0b7ba3c9b016ebfdd4b3f1aed\": rpc error: code = NotFound desc = could not find container \"eb6e695de3a0bad5d221c046062bcaaf98e8cda0b7ba3c9b016ebfdd4b3f1aed\": container with ID starting with eb6e695de3a0bad5d221c046062bcaaf98e8cda0b7ba3c9b016ebfdd4b3f1aed not found: ID does not exist" Oct 14 13:44:31 crc kubenswrapper[4725]: I1014 13:44:31.936873 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="977de268-c914-4408-9db7-ed16c1bb05b2" path="/var/lib/kubelet/pods/977de268-c914-4408-9db7-ed16c1bb05b2/volumes" Oct 14 13:44:37 crc kubenswrapper[4725]: I1014 13:44:37.046643 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-8nbmk"] Oct 14 13:44:37 crc kubenswrapper[4725]: I1014 13:44:37.054879 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-8nbmk"] Oct 14 13:44:37 crc kubenswrapper[4725]: I1014 13:44:37.934960 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83d1426f-ea53-45ba-a397-e03f35b28711" path="/var/lib/kubelet/pods/83d1426f-ea53-45ba-a397-e03f35b28711/volumes" Oct 14 13:44:38 crc kubenswrapper[4725]: I1014 13:44:38.921548 4725 scope.go:117] "RemoveContainer" containerID="6f767896ac68228fe09840fae7ddd8f8c1d269f215009c642a1baacf3a7849d4" Oct 14 13:44:38 crc kubenswrapper[4725]: E1014 13:44:38.922088 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 13:44:39 crc kubenswrapper[4725]: I1014 13:44:39.032967 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-dgd9g"] Oct 14 13:44:39 crc kubenswrapper[4725]: I1014 13:44:39.040267 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-dgd9g"] Oct 14 13:44:39 crc kubenswrapper[4725]: I1014 13:44:39.930697 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfa8d069-e628-42b1-a2c3-f099375ffff3" path="/var/lib/kubelet/pods/bfa8d069-e628-42b1-a2c3-f099375ffff3/volumes" Oct 14 13:44:47 crc kubenswrapper[4725]: I1014 13:44:47.591197 4725 scope.go:117] "RemoveContainer" containerID="65bc1a55c871dcd6600774fecec4a96d1809e8d6e18f399f9283898b8bea6736" Oct 14 13:44:47 crc kubenswrapper[4725]: I1014 13:44:47.632584 4725 scope.go:117] "RemoveContainer" containerID="235ad7f53874c346b76f4fe0897a938719c662e7e12e0180127752edd4d93294" Oct 14 13:44:47 crc kubenswrapper[4725]: I1014 13:44:47.711715 4725 scope.go:117] "RemoveContainer" 
containerID="7e401f768f83790b17644e75cdcd6d9420658ebfe8d9aa964ec1970420f3a4a5" Oct 14 13:44:47 crc kubenswrapper[4725]: I1014 13:44:47.785708 4725 scope.go:117] "RemoveContainer" containerID="bbb0e62b1056d3930e3ca49ec5814dbd5caaf13a2c466858a065aec89fef9b5e" Oct 14 13:44:47 crc kubenswrapper[4725]: I1014 13:44:47.816723 4725 scope.go:117] "RemoveContainer" containerID="5803a7182b99075f418b8ab57c8d00430a74a5361a003e3ab534e1801fd3e228" Oct 14 13:44:47 crc kubenswrapper[4725]: I1014 13:44:47.894435 4725 scope.go:117] "RemoveContainer" containerID="020c7744062f32160dc40a4a307a183c598276f990787c6fdd2039dbccc46b1c" Oct 14 13:44:53 crc kubenswrapper[4725]: I1014 13:44:53.921958 4725 scope.go:117] "RemoveContainer" containerID="6f767896ac68228fe09840fae7ddd8f8c1d269f215009c642a1baacf3a7849d4" Oct 14 13:44:53 crc kubenswrapper[4725]: E1014 13:44:53.922911 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 13:45:00 crc kubenswrapper[4725]: I1014 13:45:00.147220 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340825-z84z6"] Oct 14 13:45:00 crc kubenswrapper[4725]: E1014 13:45:00.148087 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="977de268-c914-4408-9db7-ed16c1bb05b2" containerName="registry-server" Oct 14 13:45:00 crc kubenswrapper[4725]: I1014 13:45:00.148102 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="977de268-c914-4408-9db7-ed16c1bb05b2" containerName="registry-server" Oct 14 13:45:00 crc kubenswrapper[4725]: E1014 13:45:00.148111 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="977de268-c914-4408-9db7-ed16c1bb05b2" containerName="extract-content" Oct 14 13:45:00 crc kubenswrapper[4725]: I1014 13:45:00.148117 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="977de268-c914-4408-9db7-ed16c1bb05b2" containerName="extract-content" Oct 14 13:45:00 crc kubenswrapper[4725]: E1014 13:45:00.148130 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="977de268-c914-4408-9db7-ed16c1bb05b2" containerName="extract-utilities" Oct 14 13:45:00 crc kubenswrapper[4725]: I1014 13:45:00.148138 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="977de268-c914-4408-9db7-ed16c1bb05b2" containerName="extract-utilities" Oct 14 13:45:00 crc kubenswrapper[4725]: I1014 13:45:00.148364 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="977de268-c914-4408-9db7-ed16c1bb05b2" containerName="registry-server" Oct 14 13:45:00 crc kubenswrapper[4725]: I1014 13:45:00.149008 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340825-z84z6" Oct 14 13:45:00 crc kubenswrapper[4725]: I1014 13:45:00.151520 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 14 13:45:00 crc kubenswrapper[4725]: I1014 13:45:00.152149 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 14 13:45:00 crc kubenswrapper[4725]: I1014 13:45:00.162377 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340825-z84z6"] Oct 14 13:45:00 crc kubenswrapper[4725]: I1014 13:45:00.251995 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c13fb15-c5d9-44a3-bb97-4222cce2b994-config-volume\") pod \"collect-profiles-29340825-z84z6\" (UID: \"7c13fb15-c5d9-44a3-bb97-4222cce2b994\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340825-z84z6" Oct 14 13:45:00 crc kubenswrapper[4725]: I1014 13:45:00.252769 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqst7\" (UniqueName: \"kubernetes.io/projected/7c13fb15-c5d9-44a3-bb97-4222cce2b994-kube-api-access-rqst7\") pod \"collect-profiles-29340825-z84z6\" (UID: \"7c13fb15-c5d9-44a3-bb97-4222cce2b994\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340825-z84z6" Oct 14 13:45:00 crc kubenswrapper[4725]: I1014 13:45:00.252823 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7c13fb15-c5d9-44a3-bb97-4222cce2b994-secret-volume\") pod \"collect-profiles-29340825-z84z6\" (UID: \"7c13fb15-c5d9-44a3-bb97-4222cce2b994\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340825-z84z6" Oct 14 13:45:00 crc kubenswrapper[4725]: I1014 13:45:00.354018 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c13fb15-c5d9-44a3-bb97-4222cce2b994-config-volume\") pod \"collect-profiles-29340825-z84z6\" (UID: \"7c13fb15-c5d9-44a3-bb97-4222cce2b994\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340825-z84z6" Oct 14 13:45:00 crc kubenswrapper[4725]: I1014 13:45:00.354083 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqst7\" (UniqueName: \"kubernetes.io/projected/7c13fb15-c5d9-44a3-bb97-4222cce2b994-kube-api-access-rqst7\") pod \"collect-profiles-29340825-z84z6\" (UID: \"7c13fb15-c5d9-44a3-bb97-4222cce2b994\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340825-z84z6" Oct 14 13:45:00 crc kubenswrapper[4725]: I1014 13:45:00.354140 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7c13fb15-c5d9-44a3-bb97-4222cce2b994-secret-volume\") pod \"collect-profiles-29340825-z84z6\" (UID: \"7c13fb15-c5d9-44a3-bb97-4222cce2b994\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340825-z84z6" Oct 14 13:45:00 crc kubenswrapper[4725]: I1014 13:45:00.355038 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c13fb15-c5d9-44a3-bb97-4222cce2b994-config-volume\") pod 
\"collect-profiles-29340825-z84z6\" (UID: \"7c13fb15-c5d9-44a3-bb97-4222cce2b994\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340825-z84z6" Oct 14 13:45:00 crc kubenswrapper[4725]: I1014 13:45:00.368711 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7c13fb15-c5d9-44a3-bb97-4222cce2b994-secret-volume\") pod \"collect-profiles-29340825-z84z6\" (UID: \"7c13fb15-c5d9-44a3-bb97-4222cce2b994\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340825-z84z6" Oct 14 13:45:00 crc kubenswrapper[4725]: I1014 13:45:00.371262 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqst7\" (UniqueName: \"kubernetes.io/projected/7c13fb15-c5d9-44a3-bb97-4222cce2b994-kube-api-access-rqst7\") pod \"collect-profiles-29340825-z84z6\" (UID: \"7c13fb15-c5d9-44a3-bb97-4222cce2b994\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340825-z84z6" Oct 14 13:45:00 crc kubenswrapper[4725]: I1014 13:45:00.479333 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340825-z84z6" Oct 14 13:45:01 crc kubenswrapper[4725]: I1014 13:45:01.022685 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340825-z84z6"] Oct 14 13:45:01 crc kubenswrapper[4725]: I1014 13:45:01.453268 4725 generic.go:334] "Generic (PLEG): container finished" podID="7c13fb15-c5d9-44a3-bb97-4222cce2b994" containerID="25ab10b70a4e842ab88fe28550fd454b698fa513fffae7e73db16563b3e8f7a5" exitCode=0 Oct 14 13:45:01 crc kubenswrapper[4725]: I1014 13:45:01.453494 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340825-z84z6" event={"ID":"7c13fb15-c5d9-44a3-bb97-4222cce2b994","Type":"ContainerDied","Data":"25ab10b70a4e842ab88fe28550fd454b698fa513fffae7e73db16563b3e8f7a5"} Oct 14 13:45:01 crc kubenswrapper[4725]: I1014 13:45:01.453568 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340825-z84z6" event={"ID":"7c13fb15-c5d9-44a3-bb97-4222cce2b994","Type":"ContainerStarted","Data":"6cc809c4dabc2d36bf865f70a901046deb2dbb31d5cb6308b4942f67d60f9eb0"} Oct 14 13:45:02 crc kubenswrapper[4725]: I1014 13:45:02.814845 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340825-z84z6" Oct 14 13:45:02 crc kubenswrapper[4725]: I1014 13:45:02.910086 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7c13fb15-c5d9-44a3-bb97-4222cce2b994-secret-volume\") pod \"7c13fb15-c5d9-44a3-bb97-4222cce2b994\" (UID: \"7c13fb15-c5d9-44a3-bb97-4222cce2b994\") " Oct 14 13:45:02 crc kubenswrapper[4725]: I1014 13:45:02.910294 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c13fb15-c5d9-44a3-bb97-4222cce2b994-config-volume\") pod \"7c13fb15-c5d9-44a3-bb97-4222cce2b994\" (UID: \"7c13fb15-c5d9-44a3-bb97-4222cce2b994\") " Oct 14 13:45:02 crc kubenswrapper[4725]: I1014 13:45:02.910381 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqst7\" (UniqueName: \"kubernetes.io/projected/7c13fb15-c5d9-44a3-bb97-4222cce2b994-kube-api-access-rqst7\") pod \"7c13fb15-c5d9-44a3-bb97-4222cce2b994\" (UID: \"7c13fb15-c5d9-44a3-bb97-4222cce2b994\") " Oct 14 13:45:02 crc kubenswrapper[4725]: I1014 13:45:02.911051 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c13fb15-c5d9-44a3-bb97-4222cce2b994-config-volume" (OuterVolumeSpecName: "config-volume") pod "7c13fb15-c5d9-44a3-bb97-4222cce2b994" (UID: "7c13fb15-c5d9-44a3-bb97-4222cce2b994"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:45:02 crc kubenswrapper[4725]: I1014 13:45:02.911582 4725 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c13fb15-c5d9-44a3-bb97-4222cce2b994-config-volume\") on node \"crc\" DevicePath \"\"" Oct 14 13:45:02 crc kubenswrapper[4725]: I1014 13:45:02.916274 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c13fb15-c5d9-44a3-bb97-4222cce2b994-kube-api-access-rqst7" (OuterVolumeSpecName: "kube-api-access-rqst7") pod "7c13fb15-c5d9-44a3-bb97-4222cce2b994" (UID: "7c13fb15-c5d9-44a3-bb97-4222cce2b994"). InnerVolumeSpecName "kube-api-access-rqst7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:45:02 crc kubenswrapper[4725]: I1014 13:45:02.922242 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c13fb15-c5d9-44a3-bb97-4222cce2b994-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7c13fb15-c5d9-44a3-bb97-4222cce2b994" (UID: "7c13fb15-c5d9-44a3-bb97-4222cce2b994"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:45:03 crc kubenswrapper[4725]: I1014 13:45:03.012671 4725 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7c13fb15-c5d9-44a3-bb97-4222cce2b994-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 14 13:45:03 crc kubenswrapper[4725]: I1014 13:45:03.012728 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqst7\" (UniqueName: \"kubernetes.io/projected/7c13fb15-c5d9-44a3-bb97-4222cce2b994-kube-api-access-rqst7\") on node \"crc\" DevicePath \"\"" Oct 14 13:45:03 crc kubenswrapper[4725]: I1014 13:45:03.479247 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340825-z84z6" event={"ID":"7c13fb15-c5d9-44a3-bb97-4222cce2b994","Type":"ContainerDied","Data":"6cc809c4dabc2d36bf865f70a901046deb2dbb31d5cb6308b4942f67d60f9eb0"} Oct 14 13:45:03 crc kubenswrapper[4725]: I1014 13:45:03.479671 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cc809c4dabc2d36bf865f70a901046deb2dbb31d5cb6308b4942f67d60f9eb0" Oct 14 13:45:03 crc kubenswrapper[4725]: I1014 13:45:03.479395 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340825-z84z6" Oct 14 13:45:08 crc kubenswrapper[4725]: I1014 13:45:08.922351 4725 scope.go:117] "RemoveContainer" containerID="6f767896ac68228fe09840fae7ddd8f8c1d269f215009c642a1baacf3a7849d4" Oct 14 13:45:08 crc kubenswrapper[4725]: E1014 13:45:08.923687 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 13:45:17 crc kubenswrapper[4725]: I1014 13:45:17.635950 4725 generic.go:334] "Generic (PLEG): container finished" podID="d261a991-7df5-4a6b-981d-b191a3d4702b" containerID="186fe5a9bafa6bb9f88035a73bbfeab80c893f3b94903f079c06ba5e46969ff6" exitCode=2 Oct 14 13:45:17 crc kubenswrapper[4725]: I1014 13:45:17.636050 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mnw6x" event={"ID":"d261a991-7df5-4a6b-981d-b191a3d4702b","Type":"ContainerDied","Data":"186fe5a9bafa6bb9f88035a73bbfeab80c893f3b94903f079c06ba5e46969ff6"} Oct 14 13:45:19 crc kubenswrapper[4725]: I1014 13:45:19.075333 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mnw6x" Oct 14 13:45:19 crc kubenswrapper[4725]: I1014 13:45:19.161292 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d261a991-7df5-4a6b-981d-b191a3d4702b-ssh-key\") pod \"d261a991-7df5-4a6b-981d-b191a3d4702b\" (UID: \"d261a991-7df5-4a6b-981d-b191a3d4702b\") " Oct 14 13:45:19 crc kubenswrapper[4725]: I1014 13:45:19.161340 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d261a991-7df5-4a6b-981d-b191a3d4702b-inventory\") pod \"d261a991-7df5-4a6b-981d-b191a3d4702b\" (UID: \"d261a991-7df5-4a6b-981d-b191a3d4702b\") " Oct 14 13:45:19 crc kubenswrapper[4725]: I1014 13:45:19.161436 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkhdt\" (UniqueName: \"kubernetes.io/projected/d261a991-7df5-4a6b-981d-b191a3d4702b-kube-api-access-qkhdt\") pod \"d261a991-7df5-4a6b-981d-b191a3d4702b\" (UID: \"d261a991-7df5-4a6b-981d-b191a3d4702b\") " Oct 14 13:45:19 crc kubenswrapper[4725]: I1014 13:45:19.167014 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d261a991-7df5-4a6b-981d-b191a3d4702b-kube-api-access-qkhdt" (OuterVolumeSpecName: "kube-api-access-qkhdt") pod "d261a991-7df5-4a6b-981d-b191a3d4702b" (UID: "d261a991-7df5-4a6b-981d-b191a3d4702b"). InnerVolumeSpecName "kube-api-access-qkhdt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:45:19 crc kubenswrapper[4725]: I1014 13:45:19.199172 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d261a991-7df5-4a6b-981d-b191a3d4702b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d261a991-7df5-4a6b-981d-b191a3d4702b" (UID: "d261a991-7df5-4a6b-981d-b191a3d4702b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:45:19 crc kubenswrapper[4725]: I1014 13:45:19.209429 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d261a991-7df5-4a6b-981d-b191a3d4702b-inventory" (OuterVolumeSpecName: "inventory") pod "d261a991-7df5-4a6b-981d-b191a3d4702b" (UID: "d261a991-7df5-4a6b-981d-b191a3d4702b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:45:19 crc kubenswrapper[4725]: I1014 13:45:19.264917 4725 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d261a991-7df5-4a6b-981d-b191a3d4702b-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 13:45:19 crc kubenswrapper[4725]: I1014 13:45:19.264980 4725 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d261a991-7df5-4a6b-981d-b191a3d4702b-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 13:45:19 crc kubenswrapper[4725]: I1014 13:45:19.265011 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkhdt\" (UniqueName: \"kubernetes.io/projected/d261a991-7df5-4a6b-981d-b191a3d4702b-kube-api-access-qkhdt\") on node \"crc\" DevicePath \"\"" Oct 14 13:45:19 crc kubenswrapper[4725]: I1014 13:45:19.662095 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mnw6x" event={"ID":"d261a991-7df5-4a6b-981d-b191a3d4702b","Type":"ContainerDied","Data":"db89c6b32f768667076492e5a99e01cdeb2d01abb3050f2e9c468152a024d63e"} Oct 14 13:45:19 crc kubenswrapper[4725]: I1014 13:45:19.662567 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db89c6b32f768667076492e5a99e01cdeb2d01abb3050f2e9c468152a024d63e" Oct 14 13:45:19 crc kubenswrapper[4725]: I1014 13:45:19.662208 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mnw6x" Oct 14 13:45:19 crc kubenswrapper[4725]: I1014 13:45:19.921429 4725 scope.go:117] "RemoveContainer" containerID="6f767896ac68228fe09840fae7ddd8f8c1d269f215009c642a1baacf3a7849d4" Oct 14 13:45:19 crc kubenswrapper[4725]: E1014 13:45:19.922143 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 13:45:22 crc kubenswrapper[4725]: I1014 13:45:22.049881 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-tllk2"] Oct 14 13:45:22 crc kubenswrapper[4725]: I1014 13:45:22.056230 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-tllk2"] Oct 14 13:45:23 crc kubenswrapper[4725]: I1014 13:45:23.937236 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d44c6f67-5633-4639-a1e6-98a70a8c7a97" path="/var/lib/kubelet/pods/d44c6f67-5633-4639-a1e6-98a70a8c7a97/volumes" Oct 14 13:45:27 crc kubenswrapper[4725]: I1014 13:45:27.045056 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x8xm9"] Oct 14 13:45:27 crc kubenswrapper[4725]: E1014 13:45:27.046330 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d261a991-7df5-4a6b-981d-b191a3d4702b" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 14 13:45:27 crc kubenswrapper[4725]: I1014 13:45:27.046363 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="d261a991-7df5-4a6b-981d-b191a3d4702b" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 14 13:45:27 crc 
kubenswrapper[4725]: E1014 13:45:27.046403 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c13fb15-c5d9-44a3-bb97-4222cce2b994" containerName="collect-profiles" Oct 14 13:45:27 crc kubenswrapper[4725]: I1014 13:45:27.046418 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c13fb15-c5d9-44a3-bb97-4222cce2b994" containerName="collect-profiles" Oct 14 13:45:27 crc kubenswrapper[4725]: I1014 13:45:27.046811 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="d261a991-7df5-4a6b-981d-b191a3d4702b" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 14 13:45:27 crc kubenswrapper[4725]: I1014 13:45:27.046845 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c13fb15-c5d9-44a3-bb97-4222cce2b994" containerName="collect-profiles" Oct 14 13:45:27 crc kubenswrapper[4725]: I1014 13:45:27.052695 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x8xm9" Oct 14 13:45:27 crc kubenswrapper[4725]: I1014 13:45:27.055395 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 13:45:27 crc kubenswrapper[4725]: I1014 13:45:27.056313 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qnvbv" Oct 14 13:45:27 crc kubenswrapper[4725]: I1014 13:45:27.056481 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 13:45:27 crc kubenswrapper[4725]: I1014 13:45:27.056600 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 13:45:27 crc kubenswrapper[4725]: I1014 13:45:27.058347 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x8xm9"] Oct 14 13:45:27 crc kubenswrapper[4725]: I1014 13:45:27.149854 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ghbz\" (UniqueName: \"kubernetes.io/projected/b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005-kube-api-access-2ghbz\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-x8xm9\" (UID: \"b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x8xm9" Oct 14 13:45:27 crc kubenswrapper[4725]: I1014 13:45:27.149907 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-x8xm9\" (UID: \"b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x8xm9" Oct 14 13:45:27 crc kubenswrapper[4725]: I1014 13:45:27.150006 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-x8xm9\" (UID: \"b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x8xm9" Oct 14 13:45:27 crc kubenswrapper[4725]: I1014 13:45:27.252522 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ghbz\" (UniqueName: 
\"kubernetes.io/projected/b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005-kube-api-access-2ghbz\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-x8xm9\" (UID: \"b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x8xm9" Oct 14 13:45:27 crc kubenswrapper[4725]: I1014 13:45:27.252576 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-x8xm9\" (UID: \"b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x8xm9" Oct 14 13:45:27 crc kubenswrapper[4725]: I1014 13:45:27.252670 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-x8xm9\" (UID: \"b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x8xm9" Oct 14 13:45:27 crc kubenswrapper[4725]: I1014 13:45:27.275884 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-x8xm9\" (UID: \"b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x8xm9" Oct 14 13:45:27 crc kubenswrapper[4725]: I1014 13:45:27.276854 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-x8xm9\" (UID: \"b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x8xm9" Oct 14 13:45:27 crc kubenswrapper[4725]: I1014 13:45:27.280766 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ghbz\" (UniqueName: \"kubernetes.io/projected/b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005-kube-api-access-2ghbz\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-x8xm9\" (UID: \"b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x8xm9" Oct 14 13:45:27 crc kubenswrapper[4725]: I1014 13:45:27.377258 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x8xm9" Oct 14 13:45:27 crc kubenswrapper[4725]: I1014 13:45:27.889035 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x8xm9"] Oct 14 13:45:28 crc kubenswrapper[4725]: I1014 13:45:28.742792 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x8xm9" event={"ID":"b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005","Type":"ContainerStarted","Data":"67ac409ccb19d186076e9370fe6f6279196cdb7f1dc1f314537eb66020adba95"} Oct 14 13:45:28 crc kubenswrapper[4725]: I1014 13:45:28.743318 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x8xm9" event={"ID":"b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005","Type":"ContainerStarted","Data":"4119d108e151cc9ce33f3cd92b7f927fb9341aae196a833c0e18a9e05396104d"} Oct 14 13:45:28 crc kubenswrapper[4725]: I1014 13:45:28.769746 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x8xm9" podStartSLOduration=1.338790312 podStartE2EDuration="1.769723876s" podCreationTimestamp="2025-10-14 13:45:27 +0000 UTC" firstStartedPulling="2025-10-14 13:45:27.891390564 +0000 UTC m=+1844.739825373" lastFinishedPulling="2025-10-14 13:45:28.322324128 +0000 UTC m=+1845.170758937" observedRunningTime="2025-10-14 13:45:28.760256106 +0000 UTC m=+1845.608690935" watchObservedRunningTime="2025-10-14 13:45:28.769723876 +0000 UTC m=+1845.618158695" Oct 14 13:45:33 crc kubenswrapper[4725]: I1014 13:45:33.930038 4725 scope.go:117] "RemoveContainer" containerID="6f767896ac68228fe09840fae7ddd8f8c1d269f215009c642a1baacf3a7849d4" Oct 14 13:45:34 crc kubenswrapper[4725]: I1014 13:45:34.832279 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" event={"ID":"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c","Type":"ContainerStarted","Data":"b9f10e2afe51a3b5ed3de279d9f89815b29b05eb672bbd7405939a8324f43233"} Oct 14 13:45:48 crc kubenswrapper[4725]: I1014 13:45:48.058611 4725 scope.go:117] "RemoveContainer" containerID="ed51e96eabeb2b7aa18f0a343c1e41bb3720e77c94726c9a1917cf7efb679461" Oct 14 13:46:16 crc kubenswrapper[4725]: I1014 13:46:16.241360 4725 generic.go:334] "Generic (PLEG): container finished" podID="b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005" containerID="67ac409ccb19d186076e9370fe6f6279196cdb7f1dc1f314537eb66020adba95" exitCode=0 Oct 14 13:46:16 crc kubenswrapper[4725]: I1014 13:46:16.241483 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x8xm9" event={"ID":"b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005","Type":"ContainerDied","Data":"67ac409ccb19d186076e9370fe6f6279196cdb7f1dc1f314537eb66020adba95"} Oct 14 13:46:17 crc kubenswrapper[4725]: I1014 13:46:17.723649 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x8xm9" Oct 14 13:46:17 crc kubenswrapper[4725]: I1014 13:46:17.793496 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005-ssh-key\") pod \"b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005\" (UID: \"b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005\") " Oct 14 13:46:17 crc kubenswrapper[4725]: I1014 13:46:17.793574 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005-inventory\") pod \"b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005\" (UID: \"b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005\") " Oct 14 13:46:17 crc kubenswrapper[4725]: I1014 13:46:17.793668 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ghbz\" (UniqueName: \"kubernetes.io/projected/b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005-kube-api-access-2ghbz\") pod \"b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005\" (UID: \"b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005\") " Oct 14 13:46:17 crc kubenswrapper[4725]: I1014 13:46:17.800165 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005-kube-api-access-2ghbz" (OuterVolumeSpecName: "kube-api-access-2ghbz") pod "b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005" (UID: "b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005"). InnerVolumeSpecName "kube-api-access-2ghbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:46:17 crc kubenswrapper[4725]: I1014 13:46:17.825249 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005-inventory" (OuterVolumeSpecName: "inventory") pod "b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005" (UID: "b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:46:17 crc kubenswrapper[4725]: I1014 13:46:17.826294 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005" (UID: "b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:46:17 crc kubenswrapper[4725]: I1014 13:46:17.896392 4725 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 13:46:17 crc kubenswrapper[4725]: I1014 13:46:17.896582 4725 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 13:46:17 crc kubenswrapper[4725]: I1014 13:46:17.896593 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ghbz\" (UniqueName: \"kubernetes.io/projected/b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005-kube-api-access-2ghbz\") on node \"crc\" DevicePath \"\"" Oct 14 13:46:18 crc kubenswrapper[4725]: I1014 13:46:18.259524 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x8xm9" event={"ID":"b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005","Type":"ContainerDied","Data":"4119d108e151cc9ce33f3cd92b7f927fb9341aae196a833c0e18a9e05396104d"} Oct 14 13:46:18 crc kubenswrapper[4725]: I1014 13:46:18.259767 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4119d108e151cc9ce33f3cd92b7f927fb9341aae196a833c0e18a9e05396104d" Oct 14 13:46:18 crc kubenswrapper[4725]: I1014 13:46:18.259572 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-x8xm9" Oct 14 13:46:18 crc kubenswrapper[4725]: I1014 13:46:18.352540 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-qblng"] Oct 14 13:46:18 crc kubenswrapper[4725]: E1014 13:46:18.353039 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 14 13:46:18 crc kubenswrapper[4725]: I1014 13:46:18.353061 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 14 13:46:18 crc kubenswrapper[4725]: I1014 13:46:18.353304 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 14 13:46:18 crc kubenswrapper[4725]: I1014 13:46:18.354142 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qblng" Oct 14 13:46:18 crc kubenswrapper[4725]: I1014 13:46:18.356283 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 13:46:18 crc kubenswrapper[4725]: I1014 13:46:18.356906 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qnvbv" Oct 14 13:46:18 crc kubenswrapper[4725]: I1014 13:46:18.357406 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 13:46:18 crc kubenswrapper[4725]: I1014 13:46:18.357412 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 13:46:18 crc kubenswrapper[4725]: I1014 13:46:18.360818 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-qblng"] Oct 14 13:46:18 crc kubenswrapper[4725]: I1014 13:46:18.508780 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f9161175-d899-4a2e-89cc-f49f51470e2f-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-qblng\" (UID: \"f9161175-d899-4a2e-89cc-f49f51470e2f\") " pod="openstack/ssh-known-hosts-edpm-deployment-qblng" Oct 14 13:46:18 crc kubenswrapper[4725]: I1014 13:46:18.508899 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwb6w\" (UniqueName: \"kubernetes.io/projected/f9161175-d899-4a2e-89cc-f49f51470e2f-kube-api-access-hwb6w\") pod \"ssh-known-hosts-edpm-deployment-qblng\" (UID: \"f9161175-d899-4a2e-89cc-f49f51470e2f\") " pod="openstack/ssh-known-hosts-edpm-deployment-qblng" Oct 14 13:46:18 crc kubenswrapper[4725]: I1014 13:46:18.509051 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f9161175-d899-4a2e-89cc-f49f51470e2f-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-qblng\" (UID: \"f9161175-d899-4a2e-89cc-f49f51470e2f\") " pod="openstack/ssh-known-hosts-edpm-deployment-qblng" Oct 14 13:46:18 crc kubenswrapper[4725]: I1014 13:46:18.610504 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f9161175-d899-4a2e-89cc-f49f51470e2f-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-qblng\" (UID: \"f9161175-d899-4a2e-89cc-f49f51470e2f\") " pod="openstack/ssh-known-hosts-edpm-deployment-qblng" Oct 14 13:46:18 crc kubenswrapper[4725]: I1014 13:46:18.610903 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f9161175-d899-4a2e-89cc-f49f51470e2f-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-qblng\" (UID: \"f9161175-d899-4a2e-89cc-f49f51470e2f\") " pod="openstack/ssh-known-hosts-edpm-deployment-qblng" Oct 14 13:46:18 crc kubenswrapper[4725]: I1014 13:46:18.611084 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwb6w\" (UniqueName: \"kubernetes.io/projected/f9161175-d899-4a2e-89cc-f49f51470e2f-kube-api-access-hwb6w\") pod \"ssh-known-hosts-edpm-deployment-qblng\" (UID: \"f9161175-d899-4a2e-89cc-f49f51470e2f\") " pod="openstack/ssh-known-hosts-edpm-deployment-qblng" Oct 14 13:46:18 crc 
kubenswrapper[4725]: I1014 13:46:18.615404 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f9161175-d899-4a2e-89cc-f49f51470e2f-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-qblng\" (UID: \"f9161175-d899-4a2e-89cc-f49f51470e2f\") " pod="openstack/ssh-known-hosts-edpm-deployment-qblng" Oct 14 13:46:18 crc kubenswrapper[4725]: I1014 13:46:18.617703 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f9161175-d899-4a2e-89cc-f49f51470e2f-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-qblng\" (UID: \"f9161175-d899-4a2e-89cc-f49f51470e2f\") " pod="openstack/ssh-known-hosts-edpm-deployment-qblng" Oct 14 13:46:18 crc kubenswrapper[4725]: I1014 13:46:18.632175 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwb6w\" (UniqueName: \"kubernetes.io/projected/f9161175-d899-4a2e-89cc-f49f51470e2f-kube-api-access-hwb6w\") pod \"ssh-known-hosts-edpm-deployment-qblng\" (UID: \"f9161175-d899-4a2e-89cc-f49f51470e2f\") " pod="openstack/ssh-known-hosts-edpm-deployment-qblng" Oct 14 13:46:18 crc kubenswrapper[4725]: I1014 13:46:18.682122 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qblng" Oct 14 13:46:19 crc kubenswrapper[4725]: I1014 13:46:19.230612 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-qblng"] Oct 14 13:46:19 crc kubenswrapper[4725]: I1014 13:46:19.270420 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qblng" event={"ID":"f9161175-d899-4a2e-89cc-f49f51470e2f","Type":"ContainerStarted","Data":"04ffeeb152a02ad7b1b8e66d0a99463303bff81c07ea7f93645b5a9e4e5a5ccb"} Oct 14 13:46:20 crc kubenswrapper[4725]: I1014 13:46:20.280216 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qblng" event={"ID":"f9161175-d899-4a2e-89cc-f49f51470e2f","Type":"ContainerStarted","Data":"647b054959b841f12553fbc6fe37a54376ca67e8fd99877942c8f26ce90d7dce"} Oct 14 13:46:20 crc kubenswrapper[4725]: I1014 13:46:20.307578 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-qblng" podStartSLOduration=1.814668895 podStartE2EDuration="2.307562154s" podCreationTimestamp="2025-10-14 13:46:18 +0000 UTC" firstStartedPulling="2025-10-14 13:46:19.247881623 +0000 UTC m=+1896.096316442" lastFinishedPulling="2025-10-14 13:46:19.740774852 +0000 UTC m=+1896.589209701" observedRunningTime="2025-10-14 13:46:20.302856858 +0000 UTC m=+1897.151291667" watchObservedRunningTime="2025-10-14 13:46:20.307562154 +0000 UTC m=+1897.155996963" Oct 14 13:46:27 crc kubenswrapper[4725]: I1014 13:46:27.341015 4725 generic.go:334] "Generic (PLEG): container finished" podID="f9161175-d899-4a2e-89cc-f49f51470e2f" containerID="647b054959b841f12553fbc6fe37a54376ca67e8fd99877942c8f26ce90d7dce" exitCode=0 Oct 14 13:46:27 crc kubenswrapper[4725]: I1014 13:46:27.341303 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qblng" event={"ID":"f9161175-d899-4a2e-89cc-f49f51470e2f","Type":"ContainerDied","Data":"647b054959b841f12553fbc6fe37a54376ca67e8fd99877942c8f26ce90d7dce"} Oct 14 13:46:28 crc kubenswrapper[4725]: I1014 13:46:28.834622 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qblng" Oct 14 13:46:28 crc kubenswrapper[4725]: I1014 13:46:28.909406 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f9161175-d899-4a2e-89cc-f49f51470e2f-ssh-key-openstack-edpm-ipam\") pod \"f9161175-d899-4a2e-89cc-f49f51470e2f\" (UID: \"f9161175-d899-4a2e-89cc-f49f51470e2f\") " Oct 14 13:46:28 crc kubenswrapper[4725]: I1014 13:46:28.909503 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwb6w\" (UniqueName: \"kubernetes.io/projected/f9161175-d899-4a2e-89cc-f49f51470e2f-kube-api-access-hwb6w\") pod \"f9161175-d899-4a2e-89cc-f49f51470e2f\" (UID: \"f9161175-d899-4a2e-89cc-f49f51470e2f\") " Oct 14 13:46:28 crc kubenswrapper[4725]: I1014 13:46:28.909561 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f9161175-d899-4a2e-89cc-f49f51470e2f-inventory-0\") pod \"f9161175-d899-4a2e-89cc-f49f51470e2f\" (UID: \"f9161175-d899-4a2e-89cc-f49f51470e2f\") " Oct 14 13:46:28 crc kubenswrapper[4725]: I1014 13:46:28.917931 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9161175-d899-4a2e-89cc-f49f51470e2f-kube-api-access-hwb6w" (OuterVolumeSpecName: "kube-api-access-hwb6w") pod "f9161175-d899-4a2e-89cc-f49f51470e2f" (UID: "f9161175-d899-4a2e-89cc-f49f51470e2f"). InnerVolumeSpecName "kube-api-access-hwb6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:46:28 crc kubenswrapper[4725]: I1014 13:46:28.938380 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9161175-d899-4a2e-89cc-f49f51470e2f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f9161175-d899-4a2e-89cc-f49f51470e2f" (UID: "f9161175-d899-4a2e-89cc-f49f51470e2f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:46:28 crc kubenswrapper[4725]: I1014 13:46:28.959737 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9161175-d899-4a2e-89cc-f49f51470e2f-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "f9161175-d899-4a2e-89cc-f49f51470e2f" (UID: "f9161175-d899-4a2e-89cc-f49f51470e2f"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:46:29 crc kubenswrapper[4725]: I1014 13:46:29.013796 4725 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f9161175-d899-4a2e-89cc-f49f51470e2f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Oct 14 13:46:29 crc kubenswrapper[4725]: I1014 13:46:29.013836 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwb6w\" (UniqueName: \"kubernetes.io/projected/f9161175-d899-4a2e-89cc-f49f51470e2f-kube-api-access-hwb6w\") on node \"crc\" DevicePath \"\"" Oct 14 13:46:29 crc kubenswrapper[4725]: I1014 13:46:29.013857 4725 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f9161175-d899-4a2e-89cc-f49f51470e2f-inventory-0\") on node \"crc\" DevicePath \"\"" Oct 14 13:46:29 crc kubenswrapper[4725]: I1014 13:46:29.371271 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qblng" event={"ID":"f9161175-d899-4a2e-89cc-f49f51470e2f","Type":"ContainerDied","Data":"04ffeeb152a02ad7b1b8e66d0a99463303bff81c07ea7f93645b5a9e4e5a5ccb"} Oct 14 13:46:29 crc kubenswrapper[4725]: I1014 13:46:29.371572 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04ffeeb152a02ad7b1b8e66d0a99463303bff81c07ea7f93645b5a9e4e5a5ccb" Oct 14 13:46:29 crc kubenswrapper[4725]: I1014 13:46:29.371473 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qblng" Oct 14 13:46:29 crc kubenswrapper[4725]: I1014 13:46:29.451650 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-594bc"] Oct 14 13:46:29 crc kubenswrapper[4725]: E1014 13:46:29.452101 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9161175-d899-4a2e-89cc-f49f51470e2f" containerName="ssh-known-hosts-edpm-deployment" Oct 14 13:46:29 crc kubenswrapper[4725]: I1014 13:46:29.452125 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9161175-d899-4a2e-89cc-f49f51470e2f" containerName="ssh-known-hosts-edpm-deployment" Oct 14 13:46:29 crc kubenswrapper[4725]: I1014 13:46:29.452413 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9161175-d899-4a2e-89cc-f49f51470e2f" containerName="ssh-known-hosts-edpm-deployment" Oct 14 13:46:29 crc kubenswrapper[4725]: I1014 13:46:29.453172 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-594bc" Oct 14 13:46:29 crc kubenswrapper[4725]: I1014 13:46:29.455991 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 13:46:29 crc kubenswrapper[4725]: I1014 13:46:29.464608 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-594bc"] Oct 14 13:46:29 crc kubenswrapper[4725]: I1014 13:46:29.466857 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 13:46:29 crc kubenswrapper[4725]: I1014 13:46:29.467610 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qnvbv" Oct 14 13:46:29 crc kubenswrapper[4725]: I1014 13:46:29.468172 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 13:46:29 crc kubenswrapper[4725]: I1014 13:46:29.530161 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9693426d-0bd5-4a57-84ca-6491f9fdc1a0-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-594bc\" (UID: \"9693426d-0bd5-4a57-84ca-6491f9fdc1a0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-594bc" Oct 14 13:46:29 crc kubenswrapper[4725]: I1014 13:46:29.530312 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5vck\" (UniqueName: \"kubernetes.io/projected/9693426d-0bd5-4a57-84ca-6491f9fdc1a0-kube-api-access-s5vck\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-594bc\" (UID: \"9693426d-0bd5-4a57-84ca-6491f9fdc1a0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-594bc" Oct 14 13:46:29 crc kubenswrapper[4725]: I1014 13:46:29.530426 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9693426d-0bd5-4a57-84ca-6491f9fdc1a0-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-594bc\" (UID: \"9693426d-0bd5-4a57-84ca-6491f9fdc1a0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-594bc" Oct 14 13:46:29 crc kubenswrapper[4725]: I1014 13:46:29.632513 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9693426d-0bd5-4a57-84ca-6491f9fdc1a0-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-594bc\" (UID: \"9693426d-0bd5-4a57-84ca-6491f9fdc1a0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-594bc" Oct 14 13:46:29 crc kubenswrapper[4725]: I1014 13:46:29.632566 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9693426d-0bd5-4a57-84ca-6491f9fdc1a0-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-594bc\" (UID: \"9693426d-0bd5-4a57-84ca-6491f9fdc1a0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-594bc" Oct 14 13:46:29 crc kubenswrapper[4725]: I1014 13:46:29.632666 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5vck\" (UniqueName: \"kubernetes.io/projected/9693426d-0bd5-4a57-84ca-6491f9fdc1a0-kube-api-access-s5vck\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-594bc\" (UID: \"9693426d-0bd5-4a57-84ca-6491f9fdc1a0\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-594bc" Oct 14 13:46:29 crc kubenswrapper[4725]: I1014 13:46:29.637668 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9693426d-0bd5-4a57-84ca-6491f9fdc1a0-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-594bc\" (UID: \"9693426d-0bd5-4a57-84ca-6491f9fdc1a0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-594bc" Oct 14 13:46:29 crc kubenswrapper[4725]: I1014 13:46:29.649416 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9693426d-0bd5-4a57-84ca-6491f9fdc1a0-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-594bc\" (UID: \"9693426d-0bd5-4a57-84ca-6491f9fdc1a0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-594bc" Oct 14 13:46:29 crc kubenswrapper[4725]: I1014 13:46:29.662356 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5vck\" (UniqueName: \"kubernetes.io/projected/9693426d-0bd5-4a57-84ca-6491f9fdc1a0-kube-api-access-s5vck\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-594bc\" (UID: \"9693426d-0bd5-4a57-84ca-6491f9fdc1a0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-594bc" Oct 14 13:46:29 crc kubenswrapper[4725]: I1014 13:46:29.841233 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-594bc" Oct 14 13:46:30 crc kubenswrapper[4725]: I1014 13:46:30.402068 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-594bc"] Oct 14 13:46:31 crc kubenswrapper[4725]: I1014 13:46:31.397041 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-594bc" event={"ID":"9693426d-0bd5-4a57-84ca-6491f9fdc1a0","Type":"ContainerStarted","Data":"1e9cec830d0ab6e73903283283aaa07cc4a3122cafec18a944e966a8dc447676"} Oct 14 13:46:31 crc kubenswrapper[4725]: I1014 13:46:31.397402 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-594bc" event={"ID":"9693426d-0bd5-4a57-84ca-6491f9fdc1a0","Type":"ContainerStarted","Data":"4b8249eb7c393e8aef29fbd7b98dd6e7742149d92fb19db942eb70ce66c90b3c"} Oct 14 13:46:40 crc kubenswrapper[4725]: I1014 13:46:40.483508 4725 generic.go:334] "Generic (PLEG): container finished" podID="9693426d-0bd5-4a57-84ca-6491f9fdc1a0" containerID="1e9cec830d0ab6e73903283283aaa07cc4a3122cafec18a944e966a8dc447676" exitCode=0 Oct 14 13:46:40 crc kubenswrapper[4725]: I1014 13:46:40.483544 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-594bc" event={"ID":"9693426d-0bd5-4a57-84ca-6491f9fdc1a0","Type":"ContainerDied","Data":"1e9cec830d0ab6e73903283283aaa07cc4a3122cafec18a944e966a8dc447676"} Oct 14 13:46:41 crc kubenswrapper[4725]: I1014 13:46:41.962521 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-594bc" Oct 14 13:46:41 crc kubenswrapper[4725]: I1014 13:46:41.994988 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5vck\" (UniqueName: \"kubernetes.io/projected/9693426d-0bd5-4a57-84ca-6491f9fdc1a0-kube-api-access-s5vck\") pod \"9693426d-0bd5-4a57-84ca-6491f9fdc1a0\" (UID: \"9693426d-0bd5-4a57-84ca-6491f9fdc1a0\") " Oct 14 13:46:41 crc kubenswrapper[4725]: I1014 13:46:41.995103 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9693426d-0bd5-4a57-84ca-6491f9fdc1a0-inventory\") pod \"9693426d-0bd5-4a57-84ca-6491f9fdc1a0\" (UID: \"9693426d-0bd5-4a57-84ca-6491f9fdc1a0\") " Oct 14 13:46:41 crc kubenswrapper[4725]: I1014 13:46:41.995197 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9693426d-0bd5-4a57-84ca-6491f9fdc1a0-ssh-key\") pod \"9693426d-0bd5-4a57-84ca-6491f9fdc1a0\" (UID: \"9693426d-0bd5-4a57-84ca-6491f9fdc1a0\") " Oct 14 13:46:42 crc kubenswrapper[4725]: I1014 13:46:42.008862 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9693426d-0bd5-4a57-84ca-6491f9fdc1a0-kube-api-access-s5vck" (OuterVolumeSpecName: "kube-api-access-s5vck") pod "9693426d-0bd5-4a57-84ca-6491f9fdc1a0" (UID: "9693426d-0bd5-4a57-84ca-6491f9fdc1a0"). InnerVolumeSpecName "kube-api-access-s5vck". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:46:42 crc kubenswrapper[4725]: I1014 13:46:42.028187 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9693426d-0bd5-4a57-84ca-6491f9fdc1a0-inventory" (OuterVolumeSpecName: "inventory") pod "9693426d-0bd5-4a57-84ca-6491f9fdc1a0" (UID: "9693426d-0bd5-4a57-84ca-6491f9fdc1a0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:46:42 crc kubenswrapper[4725]: I1014 13:46:42.029037 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9693426d-0bd5-4a57-84ca-6491f9fdc1a0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9693426d-0bd5-4a57-84ca-6491f9fdc1a0" (UID: "9693426d-0bd5-4a57-84ca-6491f9fdc1a0"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:46:42 crc kubenswrapper[4725]: I1014 13:46:42.096991 4725 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9693426d-0bd5-4a57-84ca-6491f9fdc1a0-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 13:46:42 crc kubenswrapper[4725]: I1014 13:46:42.097043 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5vck\" (UniqueName: \"kubernetes.io/projected/9693426d-0bd5-4a57-84ca-6491f9fdc1a0-kube-api-access-s5vck\") on node \"crc\" DevicePath \"\"" Oct 14 13:46:42 crc kubenswrapper[4725]: I1014 13:46:42.097054 4725 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9693426d-0bd5-4a57-84ca-6491f9fdc1a0-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 13:46:42 crc kubenswrapper[4725]: I1014 13:46:42.504103 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-594bc" event={"ID":"9693426d-0bd5-4a57-84ca-6491f9fdc1a0","Type":"ContainerDied","Data":"4b8249eb7c393e8aef29fbd7b98dd6e7742149d92fb19db942eb70ce66c90b3c"} Oct 14 13:46:42 crc kubenswrapper[4725]: I1014 13:46:42.504143 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b8249eb7c393e8aef29fbd7b98dd6e7742149d92fb19db942eb70ce66c90b3c" Oct 14 13:46:42 crc kubenswrapper[4725]: I1014 13:46:42.504194 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-594bc" Oct 14 13:46:42 crc kubenswrapper[4725]: I1014 13:46:42.617320 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bhl58"] Oct 14 13:46:42 crc kubenswrapper[4725]: E1014 13:46:42.618035 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9693426d-0bd5-4a57-84ca-6491f9fdc1a0" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 14 13:46:42 crc kubenswrapper[4725]: I1014 13:46:42.618059 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9693426d-0bd5-4a57-84ca-6491f9fdc1a0" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 14 13:46:42 crc kubenswrapper[4725]: I1014 13:46:42.618304 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9693426d-0bd5-4a57-84ca-6491f9fdc1a0" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 14 13:46:42 crc kubenswrapper[4725]: I1014 13:46:42.619070 4725 util.go:30] "No sandbox for pod can be found. 
Oct 14 13:46:42 crc kubenswrapper[4725]: I1014 13:46:42.623906 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Oct 14 13:46:42 crc kubenswrapper[4725]: I1014 13:46:42.623932 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qnvbv"
Oct 14 13:46:42 crc kubenswrapper[4725]: I1014 13:46:42.624158 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Oct 14 13:46:42 crc kubenswrapper[4725]: I1014 13:46:42.625522 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Oct 14 13:46:42 crc kubenswrapper[4725]: I1014 13:46:42.631080 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bhl58"]
Oct 14 13:46:42 crc kubenswrapper[4725]: I1014 13:46:42.706471 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dc2f4f76-dc9b-433c-81f6-dabc27cf63c9-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bhl58\" (UID: \"dc2f4f76-dc9b-433c-81f6-dabc27cf63c9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bhl58"
Oct 14 13:46:42 crc kubenswrapper[4725]: I1014 13:46:42.706566 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmdnd\" (UniqueName: \"kubernetes.io/projected/dc2f4f76-dc9b-433c-81f6-dabc27cf63c9-kube-api-access-vmdnd\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bhl58\" (UID: \"dc2f4f76-dc9b-433c-81f6-dabc27cf63c9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bhl58"
Oct 14 13:46:42 crc kubenswrapper[4725]: I1014 13:46:42.706682 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc2f4f76-dc9b-433c-81f6-dabc27cf63c9-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bhl58\" (UID: \"dc2f4f76-dc9b-433c-81f6-dabc27cf63c9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bhl58"
Oct 14 13:46:42 crc kubenswrapper[4725]: I1014 13:46:42.808402 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dc2f4f76-dc9b-433c-81f6-dabc27cf63c9-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bhl58\" (UID: \"dc2f4f76-dc9b-433c-81f6-dabc27cf63c9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bhl58"
Oct 14 13:46:42 crc kubenswrapper[4725]: I1014 13:46:42.808500 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmdnd\" (UniqueName: \"kubernetes.io/projected/dc2f4f76-dc9b-433c-81f6-dabc27cf63c9-kube-api-access-vmdnd\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bhl58\" (UID: \"dc2f4f76-dc9b-433c-81f6-dabc27cf63c9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bhl58"
Oct 14 13:46:42 crc kubenswrapper[4725]: I1014 13:46:42.808564 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc2f4f76-dc9b-433c-81f6-dabc27cf63c9-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bhl58\" (UID: \"dc2f4f76-dc9b-433c-81f6-dabc27cf63c9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bhl58"
Oct 14 13:46:42 crc kubenswrapper[4725]: I1014 13:46:42.817147 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc2f4f76-dc9b-433c-81f6-dabc27cf63c9-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bhl58\" (UID: \"dc2f4f76-dc9b-433c-81f6-dabc27cf63c9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bhl58"
Oct 14 13:46:42 crc kubenswrapper[4725]: I1014 13:46:42.817326 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dc2f4f76-dc9b-433c-81f6-dabc27cf63c9-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bhl58\" (UID: \"dc2f4f76-dc9b-433c-81f6-dabc27cf63c9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bhl58"
Oct 14 13:46:42 crc kubenswrapper[4725]: I1014 13:46:42.838101 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmdnd\" (UniqueName: \"kubernetes.io/projected/dc2f4f76-dc9b-433c-81f6-dabc27cf63c9-kube-api-access-vmdnd\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bhl58\" (UID: \"dc2f4f76-dc9b-433c-81f6-dabc27cf63c9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bhl58"
Oct 14 13:46:42 crc kubenswrapper[4725]: I1014 13:46:42.958073 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bhl58"
Oct 14 13:46:43 crc kubenswrapper[4725]: I1014 13:46:43.496369 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bhl58"]
Oct 14 13:46:43 crc kubenswrapper[4725]: I1014 13:46:43.512906 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bhl58" event={"ID":"dc2f4f76-dc9b-433c-81f6-dabc27cf63c9","Type":"ContainerStarted","Data":"b3829fc7bdceb178658ac7a5aac70cba97aa5634401bd0d0ce365c3c2c96de77"}
Oct 14 13:46:44 crc kubenswrapper[4725]: I1014 13:46:44.331190 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Oct 14 13:46:45 crc kubenswrapper[4725]: I1014 13:46:45.531888 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bhl58" event={"ID":"dc2f4f76-dc9b-433c-81f6-dabc27cf63c9","Type":"ContainerStarted","Data":"d0e696ead67e49c794067e0367f65e6daa43bc2971adcf54712c6ff0f0565947"}
Oct 14 13:46:45 crc kubenswrapper[4725]: I1014 13:46:45.560936 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bhl58" podStartSLOduration=2.728308782 podStartE2EDuration="3.560911961s" podCreationTimestamp="2025-10-14 13:46:42 +0000 UTC" firstStartedPulling="2025-10-14 13:46:43.495985916 +0000 UTC m=+1920.344420725" lastFinishedPulling="2025-10-14 13:46:44.328589095 +0000 UTC m=+1921.177023904" observedRunningTime="2025-10-14 13:46:45.555348101 +0000 UTC m=+1922.403782980" watchObservedRunningTime="2025-10-14 13:46:45.560911961 +0000 UTC m=+1922.409346800"
Oct 14 13:46:54 crc kubenswrapper[4725]: I1014 13:46:54.631903 4725 generic.go:334] "Generic (PLEG): container finished" podID="dc2f4f76-dc9b-433c-81f6-dabc27cf63c9" containerID="d0e696ead67e49c794067e0367f65e6daa43bc2971adcf54712c6ff0f0565947" exitCode=0
Oct 14 13:46:54 crc kubenswrapper[4725]: I1014 13:46:54.632062 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bhl58" event={"ID":"dc2f4f76-dc9b-433c-81f6-dabc27cf63c9","Type":"ContainerDied","Data":"d0e696ead67e49c794067e0367f65e6daa43bc2971adcf54712c6ff0f0565947"}
Oct 14 13:46:56 crc kubenswrapper[4725]: I1014 13:46:56.100016 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bhl58"
Oct 14 13:46:56 crc kubenswrapper[4725]: I1014 13:46:56.197492 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc2f4f76-dc9b-433c-81f6-dabc27cf63c9-inventory\") pod \"dc2f4f76-dc9b-433c-81f6-dabc27cf63c9\" (UID: \"dc2f4f76-dc9b-433c-81f6-dabc27cf63c9\") "
Oct 14 13:46:56 crc kubenswrapper[4725]: I1014 13:46:56.197867 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmdnd\" (UniqueName: \"kubernetes.io/projected/dc2f4f76-dc9b-433c-81f6-dabc27cf63c9-kube-api-access-vmdnd\") pod \"dc2f4f76-dc9b-433c-81f6-dabc27cf63c9\" (UID: \"dc2f4f76-dc9b-433c-81f6-dabc27cf63c9\") "
Oct 14 13:46:56 crc kubenswrapper[4725]: I1014 13:46:56.197988 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dc2f4f76-dc9b-433c-81f6-dabc27cf63c9-ssh-key\") pod \"dc2f4f76-dc9b-433c-81f6-dabc27cf63c9\" (UID: \"dc2f4f76-dc9b-433c-81f6-dabc27cf63c9\") "
Oct 14 13:46:56 crc kubenswrapper[4725]: I1014 13:46:56.204717 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc2f4f76-dc9b-433c-81f6-dabc27cf63c9-kube-api-access-vmdnd" (OuterVolumeSpecName: "kube-api-access-vmdnd") pod "dc2f4f76-dc9b-433c-81f6-dabc27cf63c9" (UID: "dc2f4f76-dc9b-433c-81f6-dabc27cf63c9"). InnerVolumeSpecName "kube-api-access-vmdnd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:46:56 crc kubenswrapper[4725]: I1014 13:46:56.233010 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc2f4f76-dc9b-433c-81f6-dabc27cf63c9-inventory" (OuterVolumeSpecName: "inventory") pod "dc2f4f76-dc9b-433c-81f6-dabc27cf63c9" (UID: "dc2f4f76-dc9b-433c-81f6-dabc27cf63c9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:46:56 crc kubenswrapper[4725]: I1014 13:46:56.249353 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc2f4f76-dc9b-433c-81f6-dabc27cf63c9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "dc2f4f76-dc9b-433c-81f6-dabc27cf63c9" (UID: "dc2f4f76-dc9b-433c-81f6-dabc27cf63c9"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:46:56 crc kubenswrapper[4725]: I1014 13:46:56.300357 4725 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc2f4f76-dc9b-433c-81f6-dabc27cf63c9-inventory\") on node \"crc\" DevicePath \"\""
Oct 14 13:46:56 crc kubenswrapper[4725]: I1014 13:46:56.300529 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmdnd\" (UniqueName: \"kubernetes.io/projected/dc2f4f76-dc9b-433c-81f6-dabc27cf63c9-kube-api-access-vmdnd\") on node \"crc\" DevicePath \"\""
Oct 14 13:46:56 crc kubenswrapper[4725]: I1014 13:46:56.300566 4725 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dc2f4f76-dc9b-433c-81f6-dabc27cf63c9-ssh-key\") on node \"crc\" DevicePath \"\""
Oct 14 13:46:56 crc kubenswrapper[4725]: I1014 13:46:56.654801 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bhl58" event={"ID":"dc2f4f76-dc9b-433c-81f6-dabc27cf63c9","Type":"ContainerDied","Data":"b3829fc7bdceb178658ac7a5aac70cba97aa5634401bd0d0ce365c3c2c96de77"}
Oct 14 13:46:56 crc kubenswrapper[4725]: I1014 13:46:56.654848 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3829fc7bdceb178658ac7a5aac70cba97aa5634401bd0d0ce365c3c2c96de77"
Oct 14 13:46:56 crc kubenswrapper[4725]: I1014 13:46:56.654890 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bhl58"
Oct 14 13:46:56 crc kubenswrapper[4725]: I1014 13:46:56.762812 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"]
Oct 14 13:46:56 crc kubenswrapper[4725]: E1014 13:46:56.763595 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc2f4f76-dc9b-433c-81f6-dabc27cf63c9" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Oct 14 13:46:56 crc kubenswrapper[4725]: I1014 13:46:56.763632 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc2f4f76-dc9b-433c-81f6-dabc27cf63c9" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Oct 14 13:46:56 crc kubenswrapper[4725]: I1014 13:46:56.763967 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc2f4f76-dc9b-433c-81f6-dabc27cf63c9" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Oct 14 13:46:56 crc kubenswrapper[4725]: I1014 13:46:56.765145 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"
Oct 14 13:46:56 crc kubenswrapper[4725]: I1014 13:46:56.770011 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"]
Oct 14 13:46:56 crc kubenswrapper[4725]: I1014 13:46:56.775188 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Oct 14 13:46:56 crc kubenswrapper[4725]: I1014 13:46:56.775378 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0"
Oct 14 13:46:56 crc kubenswrapper[4725]: I1014 13:46:56.775587 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0"
Oct 14 13:46:56 crc kubenswrapper[4725]: I1014 13:46:56.776124 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0"
Oct 14 13:46:56 crc kubenswrapper[4725]: I1014 13:46:56.776527 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Oct 14 13:46:56 crc kubenswrapper[4725]: I1014 13:46:56.776734 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qnvbv"
Oct 14 13:46:56 crc kubenswrapper[4725]: I1014 13:46:56.776842 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0"
Oct 14 13:46:56 crc kubenswrapper[4725]: I1014 13:46:56.776873 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Oct 14 13:46:56 crc kubenswrapper[4725]: I1014 13:46:56.914821 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"
Oct 14 13:46:56 crc kubenswrapper[4725]: I1014 13:46:56.915169 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f9f32163-993e-4610-8c70-aaae1d52dc40-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"
Oct 14 13:46:56 crc kubenswrapper[4725]: I1014 13:46:56.915414 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"
Oct 14 13:46:56 crc kubenswrapper[4725]: I1014 13:46:56.915647 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f9f32163-993e-4610-8c70-aaae1d52dc40-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"
Oct 14 13:46:56 crc kubenswrapper[4725]: I1014 13:46:56.915714 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f9f32163-993e-4610-8c70-aaae1d52dc40-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"
Oct 14 13:46:56 crc kubenswrapper[4725]: I1014 13:46:56.915844 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f9f32163-993e-4610-8c70-aaae1d52dc40-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"
Oct 14 13:46:56 crc kubenswrapper[4725]: I1014 13:46:56.915900 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"
Oct 14 13:46:56 crc kubenswrapper[4725]: I1014 13:46:56.915944 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"
Oct 14 13:46:56 crc kubenswrapper[4725]: I1014 13:46:56.916082 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"
Oct 14 13:46:56 crc kubenswrapper[4725]: I1014 13:46:56.916147 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"
Oct 14 13:46:56 crc kubenswrapper[4725]: I1014 13:46:56.916210 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"
Oct 14 13:46:56 crc kubenswrapper[4725]: I1014 13:46:56.916811 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"
Oct 14 13:46:56 crc kubenswrapper[4725]: I1014 13:46:56.916877 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssw4p\" (UniqueName: \"kubernetes.io/projected/f9f32163-993e-4610-8c70-aaae1d52dc40-kube-api-access-ssw4p\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"
Oct 14 13:46:56 crc kubenswrapper[4725]: I1014 13:46:56.917022 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"
Oct 14 13:46:57 crc kubenswrapper[4725]: I1014 13:46:57.019848 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"
Oct 14 13:46:57 crc kubenswrapper[4725]: I1014 13:46:57.020010 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f9f32163-993e-4610-8c70-aaae1d52dc40-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"
Oct 14 13:46:57 crc kubenswrapper[4725]: I1014 13:46:57.020053 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f9f32163-993e-4610-8c70-aaae1d52dc40-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"
Oct 14 13:46:57 crc kubenswrapper[4725]: I1014 13:46:57.020101 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f9f32163-993e-4610-8c70-aaae1d52dc40-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"
Oct 14 13:46:57 crc kubenswrapper[4725]: I1014 13:46:57.020152 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"
Oct 14 13:46:57 crc kubenswrapper[4725]: I1014 13:46:57.020189 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"
Oct 14 13:46:57 crc kubenswrapper[4725]: I1014 13:46:57.020242 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"
Oct 14 13:46:57 crc kubenswrapper[4725]: I1014 13:46:57.020299 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"
Oct 14 13:46:57 crc kubenswrapper[4725]: I1014 13:46:57.020346 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"
Oct 14 13:46:57 crc kubenswrapper[4725]: I1014 13:46:57.020398 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssw4p\" (UniqueName: \"kubernetes.io/projected/f9f32163-993e-4610-8c70-aaae1d52dc40-kube-api-access-ssw4p\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"
Oct 14 13:46:57 crc kubenswrapper[4725]: I1014 13:46:57.020437 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"
Oct 14 13:46:57 crc kubenswrapper[4725]: I1014 13:46:57.020522 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"
Oct 14 13:46:57 crc kubenswrapper[4725]: I1014 13:46:57.020630 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"
Oct 14 13:46:57 crc kubenswrapper[4725]: I1014 13:46:57.020742 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f9f32163-993e-4610-8c70-aaae1d52dc40-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"
Oct 14 13:46:57 crc kubenswrapper[4725]: I1014 13:46:57.024247 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"
Oct 14 13:46:57 crc kubenswrapper[4725]: I1014 13:46:57.024266 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"
Oct 14 13:46:57 crc kubenswrapper[4725]: I1014 13:46:57.024374 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"
Oct 14 13:46:57 crc kubenswrapper[4725]: I1014 13:46:57.027488 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"
Oct 14 13:46:57 crc kubenswrapper[4725]: I1014 13:46:57.027814 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f9f32163-993e-4610-8c70-aaae1d52dc40-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"
Oct 14 13:46:57 crc kubenswrapper[4725]: I1014 13:46:57.027848 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"
Oct 14 13:46:57 crc kubenswrapper[4725]: I1014 13:46:57.027950 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f9f32163-993e-4610-8c70-aaae1d52dc40-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"
Oct 14 13:46:57 crc kubenswrapper[4725]: I1014 13:46:57.030388 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"
Oct 14 13:46:57 crc kubenswrapper[4725]: I1014 13:46:57.030565 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"
Oct 14 13:46:57 crc kubenswrapper[4725]: I1014 13:46:57.030611 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"
Oct 14 13:46:57 crc kubenswrapper[4725]: I1014 13:46:57.031142 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"
Oct 14 13:46:57 crc kubenswrapper[4725]: I1014 13:46:57.031204 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f9f32163-993e-4610-8c70-aaae1d52dc40-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"
Oct 14 13:46:57 crc kubenswrapper[4725]: I1014 13:46:57.032599 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f9f32163-993e-4610-8c70-aaae1d52dc40-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"
Oct 14 13:46:57 crc kubenswrapper[4725]: I1014 13:46:57.038073 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssw4p\" (UniqueName: \"kubernetes.io/projected/f9f32163-993e-4610-8c70-aaae1d52dc40-kube-api-access-ssw4p\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"
Oct 14 13:46:57 crc kubenswrapper[4725]: I1014 13:46:57.093641 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"
Oct 14 13:46:57 crc kubenswrapper[4725]: W1014 13:46:57.472583 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9f32163_993e_4610_8c70_aaae1d52dc40.slice/crio-e92875dc65554fe0834966081f775b5507867b612b01e0701bfbbcdd17a37920 WatchSource:0}: Error finding container e92875dc65554fe0834966081f775b5507867b612b01e0701bfbbcdd17a37920: Status 404 returned error can't find the container with id e92875dc65554fe0834966081f775b5507867b612b01e0701bfbbcdd17a37920
Oct 14 13:46:57 crc kubenswrapper[4725]: I1014 13:46:57.473756 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"]
Oct 14 13:46:57 crc kubenswrapper[4725]: I1014 13:46:57.668409 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m" event={"ID":"f9f32163-993e-4610-8c70-aaae1d52dc40","Type":"ContainerStarted","Data":"e92875dc65554fe0834966081f775b5507867b612b01e0701bfbbcdd17a37920"}
Oct 14 13:46:58 crc kubenswrapper[4725]: I1014 13:46:58.686221 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m" event={"ID":"f9f32163-993e-4610-8c70-aaae1d52dc40","Type":"ContainerStarted","Data":"5b9fe030b16c58db6df0be34d5a9355d4e7b043ef86aa99c97a64c6ad4ced24e"}
Oct 14 13:46:58 crc kubenswrapper[4725]: I1014 13:46:58.729816 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m" podStartSLOduration=2.243477236 podStartE2EDuration="2.729788519s" podCreationTimestamp="2025-10-14 13:46:56 +0000 UTC" firstStartedPulling="2025-10-14 13:46:57.476302172 +0000 UTC m=+1934.324736981" lastFinishedPulling="2025-10-14 13:46:57.962613445 +0000 UTC m=+1934.811048264" observedRunningTime="2025-10-14 13:46:58.714797484 +0000 UTC m=+1935.563232333" watchObservedRunningTime="2025-10-14 13:46:58.729788519 +0000 UTC m=+1935.578223368"
Oct 14 13:47:36 crc kubenswrapper[4725]: I1014 13:47:36.036797 4725 generic.go:334] "Generic (PLEG): container finished" podID="f9f32163-993e-4610-8c70-aaae1d52dc40" containerID="5b9fe030b16c58db6df0be34d5a9355d4e7b043ef86aa99c97a64c6ad4ced24e" exitCode=0
Oct 14 13:47:36 crc kubenswrapper[4725]: I1014 13:47:36.036901 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m" event={"ID":"f9f32163-993e-4610-8c70-aaae1d52dc40","Type":"ContainerDied","Data":"5b9fe030b16c58db6df0be34d5a9355d4e7b043ef86aa99c97a64c6ad4ced24e"}
Oct 14 13:47:37 crc kubenswrapper[4725]: I1014 13:47:37.526421 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"
Oct 14 13:47:37 crc kubenswrapper[4725]: I1014 13:47:37.657438 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f9f32163-993e-4610-8c70-aaae1d52dc40-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"f9f32163-993e-4610-8c70-aaae1d52dc40\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") "
Oct 14 13:47:37 crc kubenswrapper[4725]: I1014 13:47:37.657795 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f9f32163-993e-4610-8c70-aaae1d52dc40-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"f9f32163-993e-4610-8c70-aaae1d52dc40\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") "
Oct 14 13:47:37 crc kubenswrapper[4725]: I1014 13:47:37.657834 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f9f32163-993e-4610-8c70-aaae1d52dc40-openstack-edpm-ipam-ovn-default-certs-0\") pod \"f9f32163-993e-4610-8c70-aaae1d52dc40\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") "
Oct 14 13:47:37 crc kubenswrapper[4725]: I1014 13:47:37.657855 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssw4p\" (UniqueName: \"kubernetes.io/projected/f9f32163-993e-4610-8c70-aaae1d52dc40-kube-api-access-ssw4p\") pod \"f9f32163-993e-4610-8c70-aaae1d52dc40\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") "
Oct 14 13:47:37 crc kubenswrapper[4725]: I1014 13:47:37.657873 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-bootstrap-combined-ca-bundle\") pod \"f9f32163-993e-4610-8c70-aaae1d52dc40\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") "
Oct 14 13:47:37 crc kubenswrapper[4725]: I1014 13:47:37.657979 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-libvirt-combined-ca-bundle\") pod \"f9f32163-993e-4610-8c70-aaae1d52dc40\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") "
Oct 14 13:47:37 crc kubenswrapper[4725]: I1014 13:47:37.658042 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-repo-setup-combined-ca-bundle\") pod \"f9f32163-993e-4610-8c70-aaae1d52dc40\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") "
Oct 14 13:47:37 crc kubenswrapper[4725]: I1014 13:47:37.658067 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-ssh-key\") pod \"f9f32163-993e-4610-8c70-aaae1d52dc40\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") "
Oct 14 13:47:37 crc kubenswrapper[4725]: I1014 13:47:37.658109 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-telemetry-combined-ca-bundle\") pod \"f9f32163-993e-4610-8c70-aaae1d52dc40\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") "
Oct 14 13:47:37 crc kubenswrapper[4725]: I1014 13:47:37.658131 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-inventory\") pod \"f9f32163-993e-4610-8c70-aaae1d52dc40\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") "
Oct 14 13:47:37 crc kubenswrapper[4725]: I1014 13:47:37.658162 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f9f32163-993e-4610-8c70-aaae1d52dc40-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"f9f32163-993e-4610-8c70-aaae1d52dc40\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") "
Oct 14 13:47:37 crc kubenswrapper[4725]: I1014 13:47:37.658181 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-neutron-metadata-combined-ca-bundle\") pod \"f9f32163-993e-4610-8c70-aaae1d52dc40\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") "
Oct 14 13:47:37 crc kubenswrapper[4725]: I1014 13:47:37.658207 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-nova-combined-ca-bundle\") pod \"f9f32163-993e-4610-8c70-aaae1d52dc40\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") "
Oct 14 13:47:37 crc kubenswrapper[4725]: I1014 13:47:37.658257 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-ovn-combined-ca-bundle\") pod \"f9f32163-993e-4610-8c70-aaae1d52dc40\" (UID: \"f9f32163-993e-4610-8c70-aaae1d52dc40\") "
Oct 14 13:47:37 crc kubenswrapper[4725]: I1014 13:47:37.664941 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9f32163-993e-4610-8c70-aaae1d52dc40-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "f9f32163-993e-4610-8c70-aaae1d52dc40" (UID: "f9f32163-993e-4610-8c70-aaae1d52dc40"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:47:37 crc kubenswrapper[4725]: I1014 13:47:37.665889 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "f9f32163-993e-4610-8c70-aaae1d52dc40" (UID: "f9f32163-993e-4610-8c70-aaae1d52dc40"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:47:37 crc kubenswrapper[4725]: I1014 13:47:37.666326 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9f32163-993e-4610-8c70-aaae1d52dc40-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "f9f32163-993e-4610-8c70-aaae1d52dc40" (UID: "f9f32163-993e-4610-8c70-aaae1d52dc40"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:47:37 crc kubenswrapper[4725]: I1014 13:47:37.666365 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9f32163-993e-4610-8c70-aaae1d52dc40-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "f9f32163-993e-4610-8c70-aaae1d52dc40" (UID: "f9f32163-993e-4610-8c70-aaae1d52dc40"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:47:37 crc kubenswrapper[4725]: I1014 13:47:37.666752 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "f9f32163-993e-4610-8c70-aaae1d52dc40" (UID: "f9f32163-993e-4610-8c70-aaae1d52dc40"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:47:37 crc kubenswrapper[4725]: I1014 13:47:37.666951 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9f32163-993e-4610-8c70-aaae1d52dc40-kube-api-access-ssw4p" (OuterVolumeSpecName: "kube-api-access-ssw4p") pod "f9f32163-993e-4610-8c70-aaae1d52dc40" (UID: "f9f32163-993e-4610-8c70-aaae1d52dc40"). InnerVolumeSpecName "kube-api-access-ssw4p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:47:37 crc kubenswrapper[4725]: I1014 13:47:37.667128 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9f32163-993e-4610-8c70-aaae1d52dc40-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "f9f32163-993e-4610-8c70-aaae1d52dc40" (UID: "f9f32163-993e-4610-8c70-aaae1d52dc40"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 13:47:37 crc kubenswrapper[4725]: I1014 13:47:37.667552 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "f9f32163-993e-4610-8c70-aaae1d52dc40" (UID: "f9f32163-993e-4610-8c70-aaae1d52dc40"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:47:37 crc kubenswrapper[4725]: I1014 13:47:37.668578 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "f9f32163-993e-4610-8c70-aaae1d52dc40" (UID: "f9f32163-993e-4610-8c70-aaae1d52dc40"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:47:37 crc kubenswrapper[4725]: I1014 13:47:37.670417 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "f9f32163-993e-4610-8c70-aaae1d52dc40" (UID: "f9f32163-993e-4610-8c70-aaae1d52dc40"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:47:37 crc kubenswrapper[4725]: I1014 13:47:37.671944 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "f9f32163-993e-4610-8c70-aaae1d52dc40" (UID: "f9f32163-993e-4610-8c70-aaae1d52dc40"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:47:37 crc kubenswrapper[4725]: I1014 13:47:37.685022 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "f9f32163-993e-4610-8c70-aaae1d52dc40" (UID: "f9f32163-993e-4610-8c70-aaae1d52dc40"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:47:37 crc kubenswrapper[4725]: I1014 13:47:37.699037 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f9f32163-993e-4610-8c70-aaae1d52dc40" (UID: "f9f32163-993e-4610-8c70-aaae1d52dc40"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:47:37 crc kubenswrapper[4725]: I1014 13:47:37.704693 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-inventory" (OuterVolumeSpecName: "inventory") pod "f9f32163-993e-4610-8c70-aaae1d52dc40" (UID: "f9f32163-993e-4610-8c70-aaae1d52dc40"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 13:47:37 crc kubenswrapper[4725]: I1014 13:47:37.761071 4725 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 14 13:47:37 crc kubenswrapper[4725]: I1014 13:47:37.761110 4725 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 14 13:47:37 crc kubenswrapper[4725]: I1014 13:47:37.761121 4725 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f9f32163-993e-4610-8c70-aaae1d52dc40-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\""
Oct 14 13:47:37 crc kubenswrapper[4725]: I1014 13:47:37.761133 4725 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f9f32163-993e-4610-8c70-aaae1d52dc40-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\""
Oct 14 13:47:37 crc kubenswrapper[4725]: I1014 13:47:37.761144 4725 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f9f32163-993e-4610-8c70-aaae1d52dc40-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\""
Oct 14 13:47:37 crc kubenswrapper[4725]: I1014 13:47:37.761156 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssw4p\" (UniqueName: \"kubernetes.io/projected/f9f32163-993e-4610-8c70-aaae1d52dc40-kube-api-access-ssw4p\") on node \"crc\" DevicePath \"\""
Oct 14 13:47:37 crc kubenswrapper[4725]: I1014 13:47:37.761165 4725 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 14 13:47:37 crc kubenswrapper[4725]: I1014 13:47:37.761174 4725 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 14 13:47:37 crc kubenswrapper[4725]: I1014 13:47:37.761183 4725 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 14 13:47:37 crc kubenswrapper[4725]: I1014 13:47:37.761192 4725 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-ssh-key\") on node \"crc\" DevicePath \"\""
Oct 14 13:47:37 crc kubenswrapper[4725]: I1014 13:47:37.761200 4725 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 14 13:47:37 crc kubenswrapper[4725]: I1014 13:47:37.761208 4725 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-inventory\") on node \"crc\" DevicePath \"\""
Oct 14 13:47:37 crc kubenswrapper[4725]: I1014 13:47:37.761217 4725 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/f9f32163-993e-4610-8c70-aaae1d52dc40-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\""
Oct 14 13:47:37 crc kubenswrapper[4725]: I1014 13:47:37.761247 4725 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9f32163-993e-4610-8c70-aaae1d52dc40-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 14 13:47:38 crc kubenswrapper[4725]: I1014 13:47:38.055576 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m" event={"ID":"f9f32163-993e-4610-8c70-aaae1d52dc40","Type":"ContainerDied","Data":"e92875dc65554fe0834966081f775b5507867b612b01e0701bfbbcdd17a37920"}
Oct 14 13:47:38 crc kubenswrapper[4725]: I1014 13:47:38.055612 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e92875dc65554fe0834966081f775b5507867b612b01e0701bfbbcdd17a37920"
Oct 14 13:47:38 crc kubenswrapper[4725]: I1014 13:47:38.055656 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m"
Oct 14 13:47:38 crc kubenswrapper[4725]: I1014 13:47:38.161276 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-th8rg"]
Oct 14 13:47:38 crc kubenswrapper[4725]: E1014 13:47:38.161683 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9f32163-993e-4610-8c70-aaae1d52dc40" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Oct 14 13:47:38 crc kubenswrapper[4725]: I1014 13:47:38.161700 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9f32163-993e-4610-8c70-aaae1d52dc40" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Oct 14 13:47:38 crc kubenswrapper[4725]: I1014 13:47:38.161877 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9f32163-993e-4610-8c70-aaae1d52dc40" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Oct 14 13:47:38 crc kubenswrapper[4725]: I1014 13:47:38.162469 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-th8rg"
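The "Generic (PLEG): container finished" entries carry each job's result; every step in this section exits 0, which is consistent with the deployment advancing from one pod to the next. A sketch that pulls (pod, container, exitCode) triples out of those entries (illustrative; Python 3, stdin):

import re
import sys

# A nonzero exit code here is where an EDPM deployment stops progressing.
pat = re.compile(r'container finished" podID="([0-9a-f-]+)" '
                 r'containerID="([0-9a-f]+)" exitCode=(-?\d+)')
for line in sys.stdin:
    m = pat.search(line)
    if m:
        print(f"pod={m.group(1)} container={m.group(2)[:12]} exit={m.group(3)}")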
Oct 14 13:47:38 crc kubenswrapper[4725]: I1014 13:47:38.169002 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Oct 14 13:47:38 crc kubenswrapper[4725]: I1014 13:47:38.169002 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Oct 14 13:47:38 crc kubenswrapper[4725]: I1014 13:47:38.169336 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config"
Oct 14 13:47:38 crc kubenswrapper[4725]: I1014 13:47:38.169577 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qnvbv"
Oct 14 13:47:38 crc kubenswrapper[4725]: I1014 13:47:38.169657 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Oct 14 13:47:38 crc kubenswrapper[4725]: I1014 13:47:38.178567 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-th8rg"]
Oct 14 13:47:38 crc kubenswrapper[4725]: I1014 13:47:38.268863 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mbkh\" (UniqueName: \"kubernetes.io/projected/e7889f82-d1de-4040-a197-5444bf9951c6-kube-api-access-5mbkh\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-th8rg\" (UID: \"e7889f82-d1de-4040-a197-5444bf9951c6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-th8rg"
Oct 14 13:47:38 crc kubenswrapper[4725]: I1014 13:47:38.269017 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e7889f82-d1de-4040-a197-5444bf9951c6-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-th8rg\" (UID: \"e7889f82-d1de-4040-a197-5444bf9951c6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-th8rg"
Oct 14 13:47:38 crc kubenswrapper[4725]: I1014 13:47:38.269048 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7889f82-d1de-4040-a197-5444bf9951c6-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-th8rg\" (UID: \"e7889f82-d1de-4040-a197-5444bf9951c6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-th8rg"
Oct 14 13:47:38 crc kubenswrapper[4725]: I1014 13:47:38.269133 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e7889f82-d1de-4040-a197-5444bf9951c6-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-th8rg\" (UID: \"e7889f82-d1de-4040-a197-5444bf9951c6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-th8rg"
Oct 14 13:47:38 crc kubenswrapper[4725]: I1014 13:47:38.269193 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7889f82-d1de-4040-a197-5444bf9951c6-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-th8rg\" (UID: \"e7889f82-d1de-4040-a197-5444bf9951c6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-th8rg"
Oct 14 13:47:38 crc kubenswrapper[4725]: I1014 13:47:38.370369 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e7889f82-d1de-4040-a197-5444bf9951c6-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-th8rg\" (UID: \"e7889f82-d1de-4040-a197-5444bf9951c6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-th8rg"
Oct 14 13:47:38 crc kubenswrapper[4725]: I1014 13:47:38.370473 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7889f82-d1de-4040-a197-5444bf9951c6-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-th8rg\" (UID: \"e7889f82-d1de-4040-a197-5444bf9951c6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-th8rg"
Oct 14 13:47:38 crc kubenswrapper[4725]: I1014 13:47:38.370502 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mbkh\" (UniqueName: \"kubernetes.io/projected/e7889f82-d1de-4040-a197-5444bf9951c6-kube-api-access-5mbkh\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-th8rg\" (UID: \"e7889f82-d1de-4040-a197-5444bf9951c6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-th8rg"
Oct 14 13:47:38 crc kubenswrapper[4725]: I1014 13:47:38.370579 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e7889f82-d1de-4040-a197-5444bf9951c6-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-th8rg\" (UID: \"e7889f82-d1de-4040-a197-5444bf9951c6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-th8rg"
Oct 14 13:47:38 crc kubenswrapper[4725]: I1014 13:47:38.370599 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7889f82-d1de-4040-a197-5444bf9951c6-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-th8rg\" (UID: \"e7889f82-d1de-4040-a197-5444bf9951c6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-th8rg"
Oct 14 13:47:38 crc kubenswrapper[4725]: I1014 13:47:38.371518 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e7889f82-d1de-4040-a197-5444bf9951c6-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-th8rg\" (UID: \"e7889f82-d1de-4040-a197-5444bf9951c6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-th8rg"
Oct 14 13:47:38 crc kubenswrapper[4725]: I1014 13:47:38.374689 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7889f82-d1de-4040-a197-5444bf9951c6-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-th8rg\" (UID: \"e7889f82-d1de-4040-a197-5444bf9951c6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-th8rg"
Oct 14 13:47:38 crc kubenswrapper[4725]: I1014 13:47:38.374902 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7889f82-d1de-4040-a197-5444bf9951c6-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-th8rg\" (UID: \"e7889f82-d1de-4040-a197-5444bf9951c6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-th8rg"
Oct 14 13:47:38 crc kubenswrapper[4725]: I1014 13:47:38.384194 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e7889f82-d1de-4040-a197-5444bf9951c6-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-th8rg\" (UID: \"e7889f82-d1de-4040-a197-5444bf9951c6\") "
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-th8rg" Oct 14 13:47:38 crc kubenswrapper[4725]: I1014 13:47:38.391546 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mbkh\" (UniqueName: \"kubernetes.io/projected/e7889f82-d1de-4040-a197-5444bf9951c6-kube-api-access-5mbkh\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-th8rg\" (UID: \"e7889f82-d1de-4040-a197-5444bf9951c6\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-th8rg" Oct 14 13:47:38 crc kubenswrapper[4725]: I1014 13:47:38.476764 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-th8rg" Oct 14 13:47:39 crc kubenswrapper[4725]: I1014 13:47:39.002933 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-th8rg"] Oct 14 13:47:39 crc kubenswrapper[4725]: I1014 13:47:39.006432 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 13:47:39 crc kubenswrapper[4725]: I1014 13:47:39.066909 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-th8rg" event={"ID":"e7889f82-d1de-4040-a197-5444bf9951c6","Type":"ContainerStarted","Data":"527f912bf47b134914efef61868edab7d10dff0b975ee7c8ff3e773fe9e91c94"} Oct 14 13:47:40 crc kubenswrapper[4725]: I1014 13:47:40.080644 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-th8rg" event={"ID":"e7889f82-d1de-4040-a197-5444bf9951c6","Type":"ContainerStarted","Data":"a5fbddc42df8efbc7420e7c94592cb14b3e140ed84e3cce78be693c1b490860c"} Oct 14 13:47:40 crc kubenswrapper[4725]: I1014 13:47:40.096898 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-th8rg" podStartSLOduration=1.511380328 podStartE2EDuration="2.096882334s" podCreationTimestamp="2025-10-14 13:47:38 +0000 UTC" firstStartedPulling="2025-10-14 13:47:39.006041473 +0000 UTC m=+1975.854476282" lastFinishedPulling="2025-10-14 13:47:39.591543469 +0000 UTC m=+1976.439978288" observedRunningTime="2025-10-14 13:47:40.095562939 +0000 UTC m=+1976.943997778" watchObservedRunningTime="2025-10-14 13:47:40.096882334 +0000 UTC m=+1976.945317143" Oct 14 13:48:02 crc kubenswrapper[4725]: I1014 13:48:02.521132 4725 patch_prober.go:28] interesting pod/machine-config-daemon-t9hh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 13:48:02 crc kubenswrapper[4725]: I1014 13:48:02.522956 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 13:48:32 crc kubenswrapper[4725]: I1014 13:48:32.521338 4725 patch_prober.go:28] interesting pod/machine-config-daemon-t9hh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 13:48:32 crc kubenswrapper[4725]: I1014 13:48:32.521939 4725 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 13:48:44 crc kubenswrapper[4725]: I1014 13:48:44.775398 4725 generic.go:334] "Generic (PLEG): container finished" podID="e7889f82-d1de-4040-a197-5444bf9951c6" containerID="a5fbddc42df8efbc7420e7c94592cb14b3e140ed84e3cce78be693c1b490860c" exitCode=0 Oct 14 13:48:44 crc kubenswrapper[4725]: I1014 13:48:44.775683 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-th8rg" event={"ID":"e7889f82-d1de-4040-a197-5444bf9951c6","Type":"ContainerDied","Data":"a5fbddc42df8efbc7420e7c94592cb14b3e140ed84e3cce78be693c1b490860c"} Oct 14 13:48:46 crc kubenswrapper[4725]: I1014 13:48:46.362627 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-th8rg" Oct 14 13:48:46 crc kubenswrapper[4725]: I1014 13:48:46.451957 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7889f82-d1de-4040-a197-5444bf9951c6-inventory\") pod \"e7889f82-d1de-4040-a197-5444bf9951c6\" (UID: \"e7889f82-d1de-4040-a197-5444bf9951c6\") " Oct 14 13:48:46 crc kubenswrapper[4725]: I1014 13:48:46.452064 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e7889f82-d1de-4040-a197-5444bf9951c6-ssh-key\") pod \"e7889f82-d1de-4040-a197-5444bf9951c6\" (UID: \"e7889f82-d1de-4040-a197-5444bf9951c6\") " Oct 14 13:48:46 crc kubenswrapper[4725]: I1014 13:48:46.452250 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mbkh\" (UniqueName: \"kubernetes.io/projected/e7889f82-d1de-4040-a197-5444bf9951c6-kube-api-access-5mbkh\") pod \"e7889f82-d1de-4040-a197-5444bf9951c6\" (UID: \"e7889f82-d1de-4040-a197-5444bf9951c6\") " Oct 14 13:48:46 crc kubenswrapper[4725]: I1014 13:48:46.452323 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e7889f82-d1de-4040-a197-5444bf9951c6-ovncontroller-config-0\") pod \"e7889f82-d1de-4040-a197-5444bf9951c6\" (UID: \"e7889f82-d1de-4040-a197-5444bf9951c6\") " Oct 14 13:48:46 crc kubenswrapper[4725]: I1014 13:48:46.452346 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7889f82-d1de-4040-a197-5444bf9951c6-ovn-combined-ca-bundle\") pod \"e7889f82-d1de-4040-a197-5444bf9951c6\" (UID: \"e7889f82-d1de-4040-a197-5444bf9951c6\") " Oct 14 13:48:46 crc kubenswrapper[4725]: I1014 13:48:46.458953 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7889f82-d1de-4040-a197-5444bf9951c6-kube-api-access-5mbkh" (OuterVolumeSpecName: "kube-api-access-5mbkh") pod "e7889f82-d1de-4040-a197-5444bf9951c6" (UID: "e7889f82-d1de-4040-a197-5444bf9951c6"). InnerVolumeSpecName "kube-api-access-5mbkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:48:46 crc kubenswrapper[4725]: I1014 13:48:46.459758 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7889f82-d1de-4040-a197-5444bf9951c6-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "e7889f82-d1de-4040-a197-5444bf9951c6" (UID: "e7889f82-d1de-4040-a197-5444bf9951c6"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:48:46 crc kubenswrapper[4725]: I1014 13:48:46.494901 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7889f82-d1de-4040-a197-5444bf9951c6-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "e7889f82-d1de-4040-a197-5444bf9951c6" (UID: "e7889f82-d1de-4040-a197-5444bf9951c6"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:48:46 crc kubenswrapper[4725]: I1014 13:48:46.502521 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7889f82-d1de-4040-a197-5444bf9951c6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e7889f82-d1de-4040-a197-5444bf9951c6" (UID: "e7889f82-d1de-4040-a197-5444bf9951c6"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:48:46 crc kubenswrapper[4725]: I1014 13:48:46.507686 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7889f82-d1de-4040-a197-5444bf9951c6-inventory" (OuterVolumeSpecName: "inventory") pod "e7889f82-d1de-4040-a197-5444bf9951c6" (UID: "e7889f82-d1de-4040-a197-5444bf9951c6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:48:46 crc kubenswrapper[4725]: I1014 13:48:46.555329 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mbkh\" (UniqueName: \"kubernetes.io/projected/e7889f82-d1de-4040-a197-5444bf9951c6-kube-api-access-5mbkh\") on node \"crc\" DevicePath \"\"" Oct 14 13:48:46 crc kubenswrapper[4725]: I1014 13:48:46.555373 4725 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e7889f82-d1de-4040-a197-5444bf9951c6-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Oct 14 13:48:46 crc kubenswrapper[4725]: I1014 13:48:46.555386 4725 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7889f82-d1de-4040-a197-5444bf9951c6-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:48:46 crc kubenswrapper[4725]: I1014 13:48:46.555400 4725 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e7889f82-d1de-4040-a197-5444bf9951c6-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 13:48:46 crc kubenswrapper[4725]: I1014 13:48:46.555412 4725 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e7889f82-d1de-4040-a197-5444bf9951c6-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 13:48:46 crc kubenswrapper[4725]: I1014 13:48:46.807019 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-th8rg" event={"ID":"e7889f82-d1de-4040-a197-5444bf9951c6","Type":"ContainerDied","Data":"527f912bf47b134914efef61868edab7d10dff0b975ee7c8ff3e773fe9e91c94"} Oct 14 13:48:46 crc kubenswrapper[4725]: I1014 
13:48:46.807356 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="527f912bf47b134914efef61868edab7d10dff0b975ee7c8ff3e773fe9e91c94" Oct 14 13:48:46 crc kubenswrapper[4725]: I1014 13:48:46.807122 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-th8rg" Oct 14 13:48:46 crc kubenswrapper[4725]: I1014 13:48:46.948576 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74"] Oct 14 13:48:46 crc kubenswrapper[4725]: E1014 13:48:46.949147 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7889f82-d1de-4040-a197-5444bf9951c6" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Oct 14 13:48:46 crc kubenswrapper[4725]: I1014 13:48:46.949178 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7889f82-d1de-4040-a197-5444bf9951c6" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Oct 14 13:48:46 crc kubenswrapper[4725]: I1014 13:48:46.949480 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7889f82-d1de-4040-a197-5444bf9951c6" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Oct 14 13:48:46 crc kubenswrapper[4725]: I1014 13:48:46.950176 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74" Oct 14 13:48:46 crc kubenswrapper[4725]: I1014 13:48:46.955648 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Oct 14 13:48:46 crc kubenswrapper[4725]: I1014 13:48:46.955795 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 13:48:46 crc kubenswrapper[4725]: I1014 13:48:46.955831 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Oct 14 13:48:46 crc kubenswrapper[4725]: I1014 13:48:46.955883 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 13:48:46 crc kubenswrapper[4725]: I1014 13:48:46.955887 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 13:48:46 crc kubenswrapper[4725]: I1014 13:48:46.955904 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qnvbv" Oct 14 13:48:46 crc kubenswrapper[4725]: I1014 13:48:46.963903 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74"] Oct 14 13:48:47 crc kubenswrapper[4725]: I1014 13:48:47.069926 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/017a8a26-0804-4e78-972b-e74224e16f72-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74\" (UID: \"017a8a26-0804-4e78-972b-e74224e16f72\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74" Oct 14 13:48:47 crc kubenswrapper[4725]: I1014 13:48:47.069982 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drzkl\" (UniqueName: \"kubernetes.io/projected/017a8a26-0804-4e78-972b-e74224e16f72-kube-api-access-drzkl\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74\" (UID: 
\"017a8a26-0804-4e78-972b-e74224e16f72\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74" Oct 14 13:48:47 crc kubenswrapper[4725]: I1014 13:48:47.070059 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/017a8a26-0804-4e78-972b-e74224e16f72-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74\" (UID: \"017a8a26-0804-4e78-972b-e74224e16f72\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74" Oct 14 13:48:47 crc kubenswrapper[4725]: I1014 13:48:47.070106 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/017a8a26-0804-4e78-972b-e74224e16f72-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74\" (UID: \"017a8a26-0804-4e78-972b-e74224e16f72\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74" Oct 14 13:48:47 crc kubenswrapper[4725]: I1014 13:48:47.070134 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/017a8a26-0804-4e78-972b-e74224e16f72-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74\" (UID: \"017a8a26-0804-4e78-972b-e74224e16f72\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74" Oct 14 13:48:47 crc kubenswrapper[4725]: I1014 13:48:47.070318 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/017a8a26-0804-4e78-972b-e74224e16f72-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74\" (UID: \"017a8a26-0804-4e78-972b-e74224e16f72\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74" Oct 14 13:48:47 crc kubenswrapper[4725]: I1014 13:48:47.172192 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/017a8a26-0804-4e78-972b-e74224e16f72-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74\" (UID: \"017a8a26-0804-4e78-972b-e74224e16f72\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74" Oct 14 13:48:47 crc kubenswrapper[4725]: I1014 13:48:47.172334 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/017a8a26-0804-4e78-972b-e74224e16f72-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74\" (UID: \"017a8a26-0804-4e78-972b-e74224e16f72\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74" Oct 14 13:48:47 crc kubenswrapper[4725]: I1014 13:48:47.172388 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drzkl\" (UniqueName: \"kubernetes.io/projected/017a8a26-0804-4e78-972b-e74224e16f72-kube-api-access-drzkl\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74\" (UID: \"017a8a26-0804-4e78-972b-e74224e16f72\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74" Oct 14 13:48:47 crc kubenswrapper[4725]: I1014 13:48:47.172524 4725 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/017a8a26-0804-4e78-972b-e74224e16f72-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74\" (UID: \"017a8a26-0804-4e78-972b-e74224e16f72\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74" Oct 14 13:48:47 crc kubenswrapper[4725]: I1014 13:48:47.172594 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/017a8a26-0804-4e78-972b-e74224e16f72-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74\" (UID: \"017a8a26-0804-4e78-972b-e74224e16f72\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74" Oct 14 13:48:47 crc kubenswrapper[4725]: I1014 13:48:47.172639 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/017a8a26-0804-4e78-972b-e74224e16f72-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74\" (UID: \"017a8a26-0804-4e78-972b-e74224e16f72\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74" Oct 14 13:48:47 crc kubenswrapper[4725]: I1014 13:48:47.178635 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/017a8a26-0804-4e78-972b-e74224e16f72-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74\" (UID: \"017a8a26-0804-4e78-972b-e74224e16f72\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74" Oct 14 13:48:47 crc kubenswrapper[4725]: I1014 13:48:47.178692 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/017a8a26-0804-4e78-972b-e74224e16f72-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74\" (UID: \"017a8a26-0804-4e78-972b-e74224e16f72\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74" Oct 14 13:48:47 crc kubenswrapper[4725]: I1014 13:48:47.178741 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/017a8a26-0804-4e78-972b-e74224e16f72-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74\" (UID: \"017a8a26-0804-4e78-972b-e74224e16f72\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74" Oct 14 13:48:47 crc kubenswrapper[4725]: I1014 13:48:47.179245 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/017a8a26-0804-4e78-972b-e74224e16f72-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74\" (UID: \"017a8a26-0804-4e78-972b-e74224e16f72\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74" Oct 14 13:48:47 crc kubenswrapper[4725]: I1014 13:48:47.179499 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/017a8a26-0804-4e78-972b-e74224e16f72-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74\" (UID: 
\"017a8a26-0804-4e78-972b-e74224e16f72\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74" Oct 14 13:48:47 crc kubenswrapper[4725]: I1014 13:48:47.204192 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drzkl\" (UniqueName: \"kubernetes.io/projected/017a8a26-0804-4e78-972b-e74224e16f72-kube-api-access-drzkl\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74\" (UID: \"017a8a26-0804-4e78-972b-e74224e16f72\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74" Oct 14 13:48:47 crc kubenswrapper[4725]: I1014 13:48:47.278521 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74" Oct 14 13:48:47 crc kubenswrapper[4725]: W1014 13:48:47.841342 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod017a8a26_0804_4e78_972b_e74224e16f72.slice/crio-9b015e21d9276338eb2e208b3964f0a75ba7e79fec644b2111ed2fc9783e4a65 WatchSource:0}: Error finding container 9b015e21d9276338eb2e208b3964f0a75ba7e79fec644b2111ed2fc9783e4a65: Status 404 returned error can't find the container with id 9b015e21d9276338eb2e208b3964f0a75ba7e79fec644b2111ed2fc9783e4a65 Oct 14 13:48:47 crc kubenswrapper[4725]: I1014 13:48:47.841358 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74"] Oct 14 13:48:48 crc kubenswrapper[4725]: I1014 13:48:48.163114 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p72hd"] Oct 14 13:48:48 crc kubenswrapper[4725]: I1014 13:48:48.169215 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p72hd" Oct 14 13:48:48 crc kubenswrapper[4725]: I1014 13:48:48.181960 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p72hd"] Oct 14 13:48:48 crc kubenswrapper[4725]: I1014 13:48:48.190757 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faedb76b-4c8e-4eff-ad64-cd507a1447b9-utilities\") pod \"redhat-marketplace-p72hd\" (UID: \"faedb76b-4c8e-4eff-ad64-cd507a1447b9\") " pod="openshift-marketplace/redhat-marketplace-p72hd" Oct 14 13:48:48 crc kubenswrapper[4725]: I1014 13:48:48.191071 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlrnb\" (UniqueName: \"kubernetes.io/projected/faedb76b-4c8e-4eff-ad64-cd507a1447b9-kube-api-access-nlrnb\") pod \"redhat-marketplace-p72hd\" (UID: \"faedb76b-4c8e-4eff-ad64-cd507a1447b9\") " pod="openshift-marketplace/redhat-marketplace-p72hd" Oct 14 13:48:48 crc kubenswrapper[4725]: I1014 13:48:48.191261 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faedb76b-4c8e-4eff-ad64-cd507a1447b9-catalog-content\") pod \"redhat-marketplace-p72hd\" (UID: \"faedb76b-4c8e-4eff-ad64-cd507a1447b9\") " pod="openshift-marketplace/redhat-marketplace-p72hd" Oct 14 13:48:48 crc kubenswrapper[4725]: I1014 13:48:48.292944 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faedb76b-4c8e-4eff-ad64-cd507a1447b9-catalog-content\") pod \"redhat-marketplace-p72hd\" (UID: \"faedb76b-4c8e-4eff-ad64-cd507a1447b9\") " pod="openshift-marketplace/redhat-marketplace-p72hd" Oct 14 13:48:48 crc kubenswrapper[4725]: I1014 13:48:48.293038 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faedb76b-4c8e-4eff-ad64-cd507a1447b9-utilities\") pod \"redhat-marketplace-p72hd\" (UID: \"faedb76b-4c8e-4eff-ad64-cd507a1447b9\") " pod="openshift-marketplace/redhat-marketplace-p72hd" Oct 14 13:48:48 crc kubenswrapper[4725]: I1014 13:48:48.293062 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlrnb\" (UniqueName: \"kubernetes.io/projected/faedb76b-4c8e-4eff-ad64-cd507a1447b9-kube-api-access-nlrnb\") pod \"redhat-marketplace-p72hd\" (UID: \"faedb76b-4c8e-4eff-ad64-cd507a1447b9\") " pod="openshift-marketplace/redhat-marketplace-p72hd" Oct 14 13:48:48 crc kubenswrapper[4725]: I1014 13:48:48.294022 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faedb76b-4c8e-4eff-ad64-cd507a1447b9-catalog-content\") pod \"redhat-marketplace-p72hd\" (UID: \"faedb76b-4c8e-4eff-ad64-cd507a1447b9\") " pod="openshift-marketplace/redhat-marketplace-p72hd" Oct 14 13:48:48 crc kubenswrapper[4725]: I1014 13:48:48.294496 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faedb76b-4c8e-4eff-ad64-cd507a1447b9-utilities\") pod \"redhat-marketplace-p72hd\" (UID: \"faedb76b-4c8e-4eff-ad64-cd507a1447b9\") " pod="openshift-marketplace/redhat-marketplace-p72hd" Oct 14 13:48:48 crc kubenswrapper[4725]: I1014 13:48:48.312081 4725 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-nlrnb\" (UniqueName: \"kubernetes.io/projected/faedb76b-4c8e-4eff-ad64-cd507a1447b9-kube-api-access-nlrnb\") pod \"redhat-marketplace-p72hd\" (UID: \"faedb76b-4c8e-4eff-ad64-cd507a1447b9\") " pod="openshift-marketplace/redhat-marketplace-p72hd" Oct 14 13:48:48 crc kubenswrapper[4725]: I1014 13:48:48.496691 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p72hd" Oct 14 13:48:48 crc kubenswrapper[4725]: I1014 13:48:48.828155 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74" event={"ID":"017a8a26-0804-4e78-972b-e74224e16f72","Type":"ContainerStarted","Data":"f150685d3a0a649e9aa9e808985faa8b158383b3c59da572fd601731f4b5dab5"} Oct 14 13:48:48 crc kubenswrapper[4725]: I1014 13:48:48.828443 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74" event={"ID":"017a8a26-0804-4e78-972b-e74224e16f72","Type":"ContainerStarted","Data":"9b015e21d9276338eb2e208b3964f0a75ba7e79fec644b2111ed2fc9783e4a65"} Oct 14 13:48:48 crc kubenswrapper[4725]: I1014 13:48:48.853774 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74" podStartSLOduration=2.392900081 podStartE2EDuration="2.853755466s" podCreationTimestamp="2025-10-14 13:48:46 +0000 UTC" firstStartedPulling="2025-10-14 13:48:47.84319724 +0000 UTC m=+2044.691632049" lastFinishedPulling="2025-10-14 13:48:48.304052625 +0000 UTC m=+2045.152487434" observedRunningTime="2025-10-14 13:48:48.85279317 +0000 UTC m=+2045.701227999" watchObservedRunningTime="2025-10-14 13:48:48.853755466 +0000 UTC m=+2045.702190265" Oct 14 13:48:48 crc kubenswrapper[4725]: I1014 13:48:48.926871 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p72hd"] Oct 14 13:48:48 crc kubenswrapper[4725]: W1014 13:48:48.930891 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfaedb76b_4c8e_4eff_ad64_cd507a1447b9.slice/crio-107820db9a1eebdea5351325ca4bdcd828bcc920020e3e3d7ed3efd00401d315 WatchSource:0}: Error finding container 107820db9a1eebdea5351325ca4bdcd828bcc920020e3e3d7ed3efd00401d315: Status 404 returned error can't find the container with id 107820db9a1eebdea5351325ca4bdcd828bcc920020e3e3d7ed3efd00401d315 Oct 14 13:48:49 crc kubenswrapper[4725]: I1014 13:48:49.838365 4725 generic.go:334] "Generic (PLEG): container finished" podID="faedb76b-4c8e-4eff-ad64-cd507a1447b9" containerID="2634fd5b77455d5f81e6c0bdcfc2610bb46b0de0f1ef7de87a57c31c28622cf7" exitCode=0 Oct 14 13:48:49 crc kubenswrapper[4725]: I1014 13:48:49.838425 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p72hd" event={"ID":"faedb76b-4c8e-4eff-ad64-cd507a1447b9","Type":"ContainerDied","Data":"2634fd5b77455d5f81e6c0bdcfc2610bb46b0de0f1ef7de87a57c31c28622cf7"} Oct 14 13:48:49 crc kubenswrapper[4725]: I1014 13:48:49.838964 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p72hd" event={"ID":"faedb76b-4c8e-4eff-ad64-cd507a1447b9","Type":"ContainerStarted","Data":"107820db9a1eebdea5351325ca4bdcd828bcc920020e3e3d7ed3efd00401d315"} Oct 14 13:48:51 crc kubenswrapper[4725]: I1014 13:48:51.876494 4725 generic.go:334] "Generic (PLEG): container finished" 
podID="faedb76b-4c8e-4eff-ad64-cd507a1447b9" containerID="101659ac1ec495dbacc4b16f994ae41124e506b032358501f19e5ff3066174ff" exitCode=0 Oct 14 13:48:51 crc kubenswrapper[4725]: I1014 13:48:51.876610 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p72hd" event={"ID":"faedb76b-4c8e-4eff-ad64-cd507a1447b9","Type":"ContainerDied","Data":"101659ac1ec495dbacc4b16f994ae41124e506b032358501f19e5ff3066174ff"} Oct 14 13:48:52 crc kubenswrapper[4725]: I1014 13:48:52.889500 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p72hd" event={"ID":"faedb76b-4c8e-4eff-ad64-cd507a1447b9","Type":"ContainerStarted","Data":"17f9290f98f923c95eb3ca5cb2db9662dbc112e585a961bdaa4113dcb3d644c6"} Oct 14 13:48:52 crc kubenswrapper[4725]: I1014 13:48:52.926959 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p72hd" podStartSLOduration=2.428231876 podStartE2EDuration="4.926935876s" podCreationTimestamp="2025-10-14 13:48:48 +0000 UTC" firstStartedPulling="2025-10-14 13:48:49.840998034 +0000 UTC m=+2046.689432843" lastFinishedPulling="2025-10-14 13:48:52.339702024 +0000 UTC m=+2049.188136843" observedRunningTime="2025-10-14 13:48:52.912347773 +0000 UTC m=+2049.760782582" watchObservedRunningTime="2025-10-14 13:48:52.926935876 +0000 UTC m=+2049.775370685" Oct 14 13:48:57 crc kubenswrapper[4725]: I1014 13:48:57.189570 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tdfmh"] Oct 14 13:48:57 crc kubenswrapper[4725]: I1014 13:48:57.192588 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tdfmh" Oct 14 13:48:57 crc kubenswrapper[4725]: I1014 13:48:57.209254 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tdfmh"] Oct 14 13:48:57 crc kubenswrapper[4725]: I1014 13:48:57.265023 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2-utilities\") pod \"redhat-operators-tdfmh\" (UID: \"8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2\") " pod="openshift-marketplace/redhat-operators-tdfmh" Oct 14 13:48:57 crc kubenswrapper[4725]: I1014 13:48:57.265283 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpgd9\" (UniqueName: \"kubernetes.io/projected/8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2-kube-api-access-qpgd9\") pod \"redhat-operators-tdfmh\" (UID: \"8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2\") " pod="openshift-marketplace/redhat-operators-tdfmh" Oct 14 13:48:57 crc kubenswrapper[4725]: I1014 13:48:57.265671 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2-catalog-content\") pod \"redhat-operators-tdfmh\" (UID: \"8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2\") " pod="openshift-marketplace/redhat-operators-tdfmh" Oct 14 13:48:57 crc kubenswrapper[4725]: I1014 13:48:57.368152 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2-utilities\") pod \"redhat-operators-tdfmh\" (UID: \"8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2\") " pod="openshift-marketplace/redhat-operators-tdfmh" Oct 14 
13:48:57 crc kubenswrapper[4725]: I1014 13:48:57.368233 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpgd9\" (UniqueName: \"kubernetes.io/projected/8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2-kube-api-access-qpgd9\") pod \"redhat-operators-tdfmh\" (UID: \"8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2\") " pod="openshift-marketplace/redhat-operators-tdfmh" Oct 14 13:48:57 crc kubenswrapper[4725]: I1014 13:48:57.368287 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2-catalog-content\") pod \"redhat-operators-tdfmh\" (UID: \"8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2\") " pod="openshift-marketplace/redhat-operators-tdfmh" Oct 14 13:48:57 crc kubenswrapper[4725]: I1014 13:48:57.368797 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2-utilities\") pod \"redhat-operators-tdfmh\" (UID: \"8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2\") " pod="openshift-marketplace/redhat-operators-tdfmh" Oct 14 13:48:57 crc kubenswrapper[4725]: I1014 13:48:57.368826 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2-catalog-content\") pod \"redhat-operators-tdfmh\" (UID: \"8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2\") " pod="openshift-marketplace/redhat-operators-tdfmh" Oct 14 13:48:57 crc kubenswrapper[4725]: I1014 13:48:57.387212 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpgd9\" (UniqueName: \"kubernetes.io/projected/8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2-kube-api-access-qpgd9\") pod \"redhat-operators-tdfmh\" (UID: \"8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2\") " pod="openshift-marketplace/redhat-operators-tdfmh" Oct 14 13:48:57 crc kubenswrapper[4725]: I1014 13:48:57.531891 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tdfmh" Oct 14 13:48:58 crc kubenswrapper[4725]: I1014 13:48:58.038356 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tdfmh"] Oct 14 13:48:58 crc kubenswrapper[4725]: W1014 13:48:58.041044 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8618f7d1_fe03_4f1c_b4b8_3bdde7c2cbe2.slice/crio-3e198e388dbe3e7553333d60cc2ce7b7aca58feec3046f373a17c0ebb156f0e0 WatchSource:0}: Error finding container 3e198e388dbe3e7553333d60cc2ce7b7aca58feec3046f373a17c0ebb156f0e0: Status 404 returned error can't find the container with id 3e198e388dbe3e7553333d60cc2ce7b7aca58feec3046f373a17c0ebb156f0e0 Oct 14 13:48:58 crc kubenswrapper[4725]: I1014 13:48:58.496995 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p72hd" Oct 14 13:48:58 crc kubenswrapper[4725]: I1014 13:48:58.497239 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p72hd" Oct 14 13:48:58 crc kubenswrapper[4725]: I1014 13:48:58.551446 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p72hd" Oct 14 13:48:58 crc kubenswrapper[4725]: I1014 13:48:58.951639 4725 generic.go:334] "Generic (PLEG): container finished" podID="8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2" containerID="dd88467bb3240ba225fd1f4686df1ea8bd53a0737fb21ba82f66f4baf175df63" exitCode=0 Oct 14 13:48:58 crc kubenswrapper[4725]: I1014 13:48:58.951740 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tdfmh" event={"ID":"8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2","Type":"ContainerDied","Data":"dd88467bb3240ba225fd1f4686df1ea8bd53a0737fb21ba82f66f4baf175df63"} Oct 14 13:48:58 crc kubenswrapper[4725]: I1014 13:48:58.951799 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tdfmh" event={"ID":"8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2","Type":"ContainerStarted","Data":"3e198e388dbe3e7553333d60cc2ce7b7aca58feec3046f373a17c0ebb156f0e0"} Oct 14 13:48:59 crc kubenswrapper[4725]: I1014 13:48:59.013813 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p72hd" Oct 14 13:49:00 crc kubenswrapper[4725]: I1014 13:49:00.974923 4725 generic.go:334] "Generic (PLEG): container finished" podID="8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2" containerID="89a8954342ecb80ed5567029d81892bea26052da3e3e1d6dadd17067e61c787e" exitCode=0 Oct 14 13:49:00 crc kubenswrapper[4725]: I1014 13:49:00.975016 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tdfmh" event={"ID":"8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2","Type":"ContainerDied","Data":"89a8954342ecb80ed5567029d81892bea26052da3e3e1d6dadd17067e61c787e"} Oct 14 13:49:00 crc kubenswrapper[4725]: I1014 13:49:00.977965 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p72hd"] Oct 14 13:49:00 crc kubenswrapper[4725]: I1014 13:49:00.978355 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-p72hd" podUID="faedb76b-4c8e-4eff-ad64-cd507a1447b9" containerName="registry-server" containerID="cri-o://17f9290f98f923c95eb3ca5cb2db9662dbc112e585a961bdaa4113dcb3d644c6" gracePeriod=2 
Oct 14 13:49:01 crc kubenswrapper[4725]: I1014 13:49:01.987946 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tdfmh" event={"ID":"8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2","Type":"ContainerStarted","Data":"ffc56b0523b63b6d429be01f396c64551afcd172fe4d0a484c1796b76e1d522b"} Oct 14 13:49:02 crc kubenswrapper[4725]: I1014 13:49:02.520940 4725 patch_prober.go:28] interesting pod/machine-config-daemon-t9hh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 13:49:02 crc kubenswrapper[4725]: I1014 13:49:02.521036 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 13:49:02 crc kubenswrapper[4725]: I1014 13:49:02.521105 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" Oct 14 13:49:02 crc kubenswrapper[4725]: I1014 13:49:02.522226 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b9f10e2afe51a3b5ed3de279d9f89815b29b05eb672bbd7405939a8324f43233"} pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 13:49:02 crc kubenswrapper[4725]: I1014 13:49:02.522337 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" containerID="cri-o://b9f10e2afe51a3b5ed3de279d9f89815b29b05eb672bbd7405939a8324f43233" gracePeriod=600 Oct 14 13:49:03 crc kubenswrapper[4725]: I1014 13:49:03.000068 4725 generic.go:334] "Generic (PLEG): container finished" podID="faedb76b-4c8e-4eff-ad64-cd507a1447b9" containerID="17f9290f98f923c95eb3ca5cb2db9662dbc112e585a961bdaa4113dcb3d644c6" exitCode=0 Oct 14 13:49:03 crc kubenswrapper[4725]: I1014 13:49:03.000148 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p72hd" event={"ID":"faedb76b-4c8e-4eff-ad64-cd507a1447b9","Type":"ContainerDied","Data":"17f9290f98f923c95eb3ca5cb2db9662dbc112e585a961bdaa4113dcb3d644c6"} Oct 14 13:49:03 crc kubenswrapper[4725]: I1014 13:49:03.003104 4725 generic.go:334] "Generic (PLEG): container finished" podID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerID="b9f10e2afe51a3b5ed3de279d9f89815b29b05eb672bbd7405939a8324f43233" exitCode=0 Oct 14 13:49:03 crc kubenswrapper[4725]: I1014 13:49:03.003174 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" event={"ID":"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c","Type":"ContainerDied","Data":"b9f10e2afe51a3b5ed3de279d9f89815b29b05eb672bbd7405939a8324f43233"} Oct 14 13:49:03 crc kubenswrapper[4725]: I1014 13:49:03.003224 4725 scope.go:117] "RemoveContainer" containerID="6f767896ac68228fe09840fae7ddd8f8c1d269f215009c642a1baacf3a7849d4" Oct 14 13:49:03 crc kubenswrapper[4725]: I1014 13:49:03.327587 4725 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p72hd" Oct 14 13:49:03 crc kubenswrapper[4725]: I1014 13:49:03.351128 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tdfmh" podStartSLOduration=3.772893878 podStartE2EDuration="6.351110351s" podCreationTimestamp="2025-10-14 13:48:57 +0000 UTC" firstStartedPulling="2025-10-14 13:48:58.953575286 +0000 UTC m=+2055.802010095" lastFinishedPulling="2025-10-14 13:49:01.531791719 +0000 UTC m=+2058.380226568" observedRunningTime="2025-10-14 13:49:02.015851461 +0000 UTC m=+2058.864286290" watchObservedRunningTime="2025-10-14 13:49:03.351110351 +0000 UTC m=+2060.199545160" Oct 14 13:49:03 crc kubenswrapper[4725]: I1014 13:49:03.421911 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faedb76b-4c8e-4eff-ad64-cd507a1447b9-utilities\") pod \"faedb76b-4c8e-4eff-ad64-cd507a1447b9\" (UID: \"faedb76b-4c8e-4eff-ad64-cd507a1447b9\") " Oct 14 13:49:03 crc kubenswrapper[4725]: I1014 13:49:03.422083 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlrnb\" (UniqueName: \"kubernetes.io/projected/faedb76b-4c8e-4eff-ad64-cd507a1447b9-kube-api-access-nlrnb\") pod \"faedb76b-4c8e-4eff-ad64-cd507a1447b9\" (UID: \"faedb76b-4c8e-4eff-ad64-cd507a1447b9\") " Oct 14 13:49:03 crc kubenswrapper[4725]: I1014 13:49:03.422127 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faedb76b-4c8e-4eff-ad64-cd507a1447b9-catalog-content\") pod \"faedb76b-4c8e-4eff-ad64-cd507a1447b9\" (UID: \"faedb76b-4c8e-4eff-ad64-cd507a1447b9\") " Oct 14 13:49:03 crc kubenswrapper[4725]: I1014 13:49:03.423527 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/faedb76b-4c8e-4eff-ad64-cd507a1447b9-utilities" (OuterVolumeSpecName: "utilities") pod "faedb76b-4c8e-4eff-ad64-cd507a1447b9" (UID: "faedb76b-4c8e-4eff-ad64-cd507a1447b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:49:03 crc kubenswrapper[4725]: I1014 13:49:03.428682 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faedb76b-4c8e-4eff-ad64-cd507a1447b9-kube-api-access-nlrnb" (OuterVolumeSpecName: "kube-api-access-nlrnb") pod "faedb76b-4c8e-4eff-ad64-cd507a1447b9" (UID: "faedb76b-4c8e-4eff-ad64-cd507a1447b9"). InnerVolumeSpecName "kube-api-access-nlrnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:49:03 crc kubenswrapper[4725]: I1014 13:49:03.434795 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/faedb76b-4c8e-4eff-ad64-cd507a1447b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "faedb76b-4c8e-4eff-ad64-cd507a1447b9" (UID: "faedb76b-4c8e-4eff-ad64-cd507a1447b9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:49:03 crc kubenswrapper[4725]: I1014 13:49:03.524074 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlrnb\" (UniqueName: \"kubernetes.io/projected/faedb76b-4c8e-4eff-ad64-cd507a1447b9-kube-api-access-nlrnb\") on node \"crc\" DevicePath \"\"" Oct 14 13:49:03 crc kubenswrapper[4725]: I1014 13:49:03.524112 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faedb76b-4c8e-4eff-ad64-cd507a1447b9-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 13:49:03 crc kubenswrapper[4725]: I1014 13:49:03.524121 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faedb76b-4c8e-4eff-ad64-cd507a1447b9-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 13:49:04 crc kubenswrapper[4725]: I1014 13:49:04.014631 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p72hd" event={"ID":"faedb76b-4c8e-4eff-ad64-cd507a1447b9","Type":"ContainerDied","Data":"107820db9a1eebdea5351325ca4bdcd828bcc920020e3e3d7ed3efd00401d315"} Oct 14 13:49:04 crc kubenswrapper[4725]: I1014 13:49:04.015426 4725 scope.go:117] "RemoveContainer" containerID="17f9290f98f923c95eb3ca5cb2db9662dbc112e585a961bdaa4113dcb3d644c6" Oct 14 13:49:04 crc kubenswrapper[4725]: I1014 13:49:04.014663 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p72hd" Oct 14 13:49:04 crc kubenswrapper[4725]: I1014 13:49:04.018959 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" event={"ID":"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c","Type":"ContainerStarted","Data":"970512403f3a724310c7fbfce185ff25273c496de6bf6e9d555198ed6e64e7dd"} Oct 14 13:49:04 crc kubenswrapper[4725]: I1014 13:49:04.055692 4725 scope.go:117] "RemoveContainer" containerID="101659ac1ec495dbacc4b16f994ae41124e506b032358501f19e5ff3066174ff" Oct 14 13:49:04 crc kubenswrapper[4725]: I1014 13:49:04.078094 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p72hd"] Oct 14 13:49:04 crc kubenswrapper[4725]: I1014 13:49:04.079089 4725 scope.go:117] "RemoveContainer" containerID="2634fd5b77455d5f81e6c0bdcfc2610bb46b0de0f1ef7de87a57c31c28622cf7" Oct 14 13:49:04 crc kubenswrapper[4725]: I1014 13:49:04.088090 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-p72hd"] Oct 14 13:49:05 crc kubenswrapper[4725]: I1014 13:49:05.939070 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faedb76b-4c8e-4eff-ad64-cd507a1447b9" path="/var/lib/kubelet/pods/faedb76b-4c8e-4eff-ad64-cd507a1447b9/volumes" Oct 14 13:49:07 crc kubenswrapper[4725]: I1014 13:49:07.532649 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tdfmh" Oct 14 13:49:07 crc kubenswrapper[4725]: I1014 13:49:07.533041 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tdfmh" Oct 14 13:49:07 crc kubenswrapper[4725]: I1014 13:49:07.597430 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tdfmh" Oct 14 13:49:08 crc kubenswrapper[4725]: I1014 13:49:08.107170 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-tdfmh" Oct 14 13:49:08 crc kubenswrapper[4725]: I1014 13:49:08.970447 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tdfmh"] Oct 14 13:49:10 crc kubenswrapper[4725]: I1014 13:49:10.067303 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tdfmh" podUID="8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2" containerName="registry-server" containerID="cri-o://ffc56b0523b63b6d429be01f396c64551afcd172fe4d0a484c1796b76e1d522b" gracePeriod=2 Oct 14 13:49:10 crc kubenswrapper[4725]: I1014 13:49:10.491922 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tdfmh" Oct 14 13:49:10 crc kubenswrapper[4725]: I1014 13:49:10.560691 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2-utilities\") pod \"8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2\" (UID: \"8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2\") " Oct 14 13:49:10 crc kubenswrapper[4725]: I1014 13:49:10.560810 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2-catalog-content\") pod \"8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2\" (UID: \"8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2\") " Oct 14 13:49:10 crc kubenswrapper[4725]: I1014 13:49:10.560972 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpgd9\" (UniqueName: \"kubernetes.io/projected/8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2-kube-api-access-qpgd9\") pod \"8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2\" (UID: \"8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2\") " Oct 14 13:49:10 crc kubenswrapper[4725]: I1014 13:49:10.561817 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2-utilities" (OuterVolumeSpecName: "utilities") pod "8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2" (UID: "8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:49:10 crc kubenswrapper[4725]: I1014 13:49:10.567495 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2-kube-api-access-qpgd9" (OuterVolumeSpecName: "kube-api-access-qpgd9") pod "8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2" (UID: "8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2"). InnerVolumeSpecName "kube-api-access-qpgd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:49:10 crc kubenswrapper[4725]: I1014 13:49:10.643403 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2" (UID: "8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:49:10 crc kubenswrapper[4725]: I1014 13:49:10.662318 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpgd9\" (UniqueName: \"kubernetes.io/projected/8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2-kube-api-access-qpgd9\") on node \"crc\" DevicePath \"\"" Oct 14 13:49:10 crc kubenswrapper[4725]: I1014 13:49:10.662532 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 13:49:10 crc kubenswrapper[4725]: I1014 13:49:10.662543 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 13:49:11 crc kubenswrapper[4725]: I1014 13:49:11.082022 4725 generic.go:334] "Generic (PLEG): container finished" podID="8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2" containerID="ffc56b0523b63b6d429be01f396c64551afcd172fe4d0a484c1796b76e1d522b" exitCode=0 Oct 14 13:49:11 crc kubenswrapper[4725]: I1014 13:49:11.082094 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tdfmh" event={"ID":"8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2","Type":"ContainerDied","Data":"ffc56b0523b63b6d429be01f396c64551afcd172fe4d0a484c1796b76e1d522b"} Oct 14 13:49:11 crc kubenswrapper[4725]: I1014 13:49:11.082141 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tdfmh" Oct 14 13:49:11 crc kubenswrapper[4725]: I1014 13:49:11.082160 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tdfmh" event={"ID":"8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2","Type":"ContainerDied","Data":"3e198e388dbe3e7553333d60cc2ce7b7aca58feec3046f373a17c0ebb156f0e0"} Oct 14 13:49:11 crc kubenswrapper[4725]: I1014 13:49:11.082197 4725 scope.go:117] "RemoveContainer" containerID="ffc56b0523b63b6d429be01f396c64551afcd172fe4d0a484c1796b76e1d522b" Oct 14 13:49:11 crc kubenswrapper[4725]: I1014 13:49:11.109209 4725 scope.go:117] "RemoveContainer" containerID="89a8954342ecb80ed5567029d81892bea26052da3e3e1d6dadd17067e61c787e" Oct 14 13:49:11 crc kubenswrapper[4725]: I1014 13:49:11.136556 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tdfmh"] Oct 14 13:49:11 crc kubenswrapper[4725]: I1014 13:49:11.149312 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tdfmh"] Oct 14 13:49:11 crc kubenswrapper[4725]: I1014 13:49:11.162697 4725 scope.go:117] "RemoveContainer" containerID="dd88467bb3240ba225fd1f4686df1ea8bd53a0737fb21ba82f66f4baf175df63" Oct 14 13:49:11 crc kubenswrapper[4725]: I1014 13:49:11.190769 4725 scope.go:117] "RemoveContainer" containerID="ffc56b0523b63b6d429be01f396c64551afcd172fe4d0a484c1796b76e1d522b" Oct 14 13:49:11 crc kubenswrapper[4725]: E1014 13:49:11.191432 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffc56b0523b63b6d429be01f396c64551afcd172fe4d0a484c1796b76e1d522b\": container with ID starting with ffc56b0523b63b6d429be01f396c64551afcd172fe4d0a484c1796b76e1d522b not found: ID does not exist" containerID="ffc56b0523b63b6d429be01f396c64551afcd172fe4d0a484c1796b76e1d522b" Oct 14 13:49:11 crc kubenswrapper[4725]: I1014 13:49:11.191503 4725 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffc56b0523b63b6d429be01f396c64551afcd172fe4d0a484c1796b76e1d522b"} err="failed to get container status \"ffc56b0523b63b6d429be01f396c64551afcd172fe4d0a484c1796b76e1d522b\": rpc error: code = NotFound desc = could not find container \"ffc56b0523b63b6d429be01f396c64551afcd172fe4d0a484c1796b76e1d522b\": container with ID starting with ffc56b0523b63b6d429be01f396c64551afcd172fe4d0a484c1796b76e1d522b not found: ID does not exist" Oct 14 13:49:11 crc kubenswrapper[4725]: I1014 13:49:11.191534 4725 scope.go:117] "RemoveContainer" containerID="89a8954342ecb80ed5567029d81892bea26052da3e3e1d6dadd17067e61c787e" Oct 14 13:49:11 crc kubenswrapper[4725]: E1014 13:49:11.192048 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89a8954342ecb80ed5567029d81892bea26052da3e3e1d6dadd17067e61c787e\": container with ID starting with 89a8954342ecb80ed5567029d81892bea26052da3e3e1d6dadd17067e61c787e not found: ID does not exist" containerID="89a8954342ecb80ed5567029d81892bea26052da3e3e1d6dadd17067e61c787e" Oct 14 13:49:11 crc kubenswrapper[4725]: I1014 13:49:11.192080 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89a8954342ecb80ed5567029d81892bea26052da3e3e1d6dadd17067e61c787e"} err="failed to get container status \"89a8954342ecb80ed5567029d81892bea26052da3e3e1d6dadd17067e61c787e\": rpc error: code = NotFound desc = could not find container \"89a8954342ecb80ed5567029d81892bea26052da3e3e1d6dadd17067e61c787e\": container with ID starting with 89a8954342ecb80ed5567029d81892bea26052da3e3e1d6dadd17067e61c787e not found: ID does not exist" Oct 14 13:49:11 crc kubenswrapper[4725]: I1014 13:49:11.192106 4725 scope.go:117] "RemoveContainer" containerID="dd88467bb3240ba225fd1f4686df1ea8bd53a0737fb21ba82f66f4baf175df63" Oct 14 13:49:11 crc kubenswrapper[4725]: E1014 13:49:11.192407 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd88467bb3240ba225fd1f4686df1ea8bd53a0737fb21ba82f66f4baf175df63\": container with ID starting with dd88467bb3240ba225fd1f4686df1ea8bd53a0737fb21ba82f66f4baf175df63 not found: ID does not exist" containerID="dd88467bb3240ba225fd1f4686df1ea8bd53a0737fb21ba82f66f4baf175df63" Oct 14 13:49:11 crc kubenswrapper[4725]: I1014 13:49:11.192433 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd88467bb3240ba225fd1f4686df1ea8bd53a0737fb21ba82f66f4baf175df63"} err="failed to get container status \"dd88467bb3240ba225fd1f4686df1ea8bd53a0737fb21ba82f66f4baf175df63\": rpc error: code = NotFound desc = could not find container \"dd88467bb3240ba225fd1f4686df1ea8bd53a0737fb21ba82f66f4baf175df63\": container with ID starting with dd88467bb3240ba225fd1f4686df1ea8bd53a0737fb21ba82f66f4baf175df63 not found: ID does not exist" Oct 14 13:49:11 crc kubenswrapper[4725]: I1014 13:49:11.930761 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2" path="/var/lib/kubelet/pods/8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2/volumes" Oct 14 13:49:35 crc kubenswrapper[4725]: I1014 13:49:35.312730 4725 generic.go:334] "Generic (PLEG): container finished" podID="017a8a26-0804-4e78-972b-e74224e16f72" containerID="f150685d3a0a649e9aa9e808985faa8b158383b3c59da572fd601731f4b5dab5" exitCode=0 Oct 14 13:49:35 crc kubenswrapper[4725]: I1014 
13:49:35.312838 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74" event={"ID":"017a8a26-0804-4e78-972b-e74224e16f72","Type":"ContainerDied","Data":"f150685d3a0a649e9aa9e808985faa8b158383b3c59da572fd601731f4b5dab5"} Oct 14 13:49:36 crc kubenswrapper[4725]: I1014 13:49:36.676826 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c46tj"] Oct 14 13:49:36 crc kubenswrapper[4725]: E1014 13:49:36.677591 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faedb76b-4c8e-4eff-ad64-cd507a1447b9" containerName="extract-utilities" Oct 14 13:49:36 crc kubenswrapper[4725]: I1014 13:49:36.677610 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="faedb76b-4c8e-4eff-ad64-cd507a1447b9" containerName="extract-utilities" Oct 14 13:49:36 crc kubenswrapper[4725]: E1014 13:49:36.677625 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faedb76b-4c8e-4eff-ad64-cd507a1447b9" containerName="registry-server" Oct 14 13:49:36 crc kubenswrapper[4725]: I1014 13:49:36.677635 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="faedb76b-4c8e-4eff-ad64-cd507a1447b9" containerName="registry-server" Oct 14 13:49:36 crc kubenswrapper[4725]: E1014 13:49:36.677651 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2" containerName="extract-utilities" Oct 14 13:49:36 crc kubenswrapper[4725]: I1014 13:49:36.677662 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2" containerName="extract-utilities" Oct 14 13:49:36 crc kubenswrapper[4725]: E1014 13:49:36.677683 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faedb76b-4c8e-4eff-ad64-cd507a1447b9" containerName="extract-content" Oct 14 13:49:36 crc kubenswrapper[4725]: I1014 13:49:36.677694 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="faedb76b-4c8e-4eff-ad64-cd507a1447b9" containerName="extract-content" Oct 14 13:49:36 crc kubenswrapper[4725]: E1014 13:49:36.677723 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2" containerName="extract-content" Oct 14 13:49:36 crc kubenswrapper[4725]: I1014 13:49:36.677731 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2" containerName="extract-content" Oct 14 13:49:36 crc kubenswrapper[4725]: E1014 13:49:36.677744 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2" containerName="registry-server" Oct 14 13:49:36 crc kubenswrapper[4725]: I1014 13:49:36.677753 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2" containerName="registry-server" Oct 14 13:49:36 crc kubenswrapper[4725]: I1014 13:49:36.678039 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="8618f7d1-fe03-4f1c-b4b8-3bdde7c2cbe2" containerName="registry-server" Oct 14 13:49:36 crc kubenswrapper[4725]: I1014 13:49:36.678074 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="faedb76b-4c8e-4eff-ad64-cd507a1447b9" containerName="registry-server" Oct 14 13:49:36 crc kubenswrapper[4725]: I1014 13:49:36.709226 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c46tj"] Oct 14 13:49:36 crc kubenswrapper[4725]: I1014 13:49:36.709397 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c46tj" Oct 14 13:49:36 crc kubenswrapper[4725]: I1014 13:49:36.742582 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74" Oct 14 13:49:36 crc kubenswrapper[4725]: I1014 13:49:36.889741 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/017a8a26-0804-4e78-972b-e74224e16f72-neutron-metadata-combined-ca-bundle\") pod \"017a8a26-0804-4e78-972b-e74224e16f72\" (UID: \"017a8a26-0804-4e78-972b-e74224e16f72\") " Oct 14 13:49:36 crc kubenswrapper[4725]: I1014 13:49:36.889976 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drzkl\" (UniqueName: \"kubernetes.io/projected/017a8a26-0804-4e78-972b-e74224e16f72-kube-api-access-drzkl\") pod \"017a8a26-0804-4e78-972b-e74224e16f72\" (UID: \"017a8a26-0804-4e78-972b-e74224e16f72\") " Oct 14 13:49:36 crc kubenswrapper[4725]: I1014 13:49:36.890036 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/017a8a26-0804-4e78-972b-e74224e16f72-neutron-ovn-metadata-agent-neutron-config-0\") pod \"017a8a26-0804-4e78-972b-e74224e16f72\" (UID: \"017a8a26-0804-4e78-972b-e74224e16f72\") " Oct 14 13:49:36 crc kubenswrapper[4725]: I1014 13:49:36.890136 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/017a8a26-0804-4e78-972b-e74224e16f72-inventory\") pod \"017a8a26-0804-4e78-972b-e74224e16f72\" (UID: \"017a8a26-0804-4e78-972b-e74224e16f72\") " Oct 14 13:49:36 crc kubenswrapper[4725]: I1014 13:49:36.890188 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/017a8a26-0804-4e78-972b-e74224e16f72-nova-metadata-neutron-config-0\") pod \"017a8a26-0804-4e78-972b-e74224e16f72\" (UID: \"017a8a26-0804-4e78-972b-e74224e16f72\") " Oct 14 13:49:36 crc kubenswrapper[4725]: I1014 13:49:36.890212 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/017a8a26-0804-4e78-972b-e74224e16f72-ssh-key\") pod \"017a8a26-0804-4e78-972b-e74224e16f72\" (UID: \"017a8a26-0804-4e78-972b-e74224e16f72\") " Oct 14 13:49:36 crc kubenswrapper[4725]: I1014 13:49:36.890619 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2jq4\" (UniqueName: \"kubernetes.io/projected/ded37bba-e75e-4232-9feb-5591b7fadbb9-kube-api-access-d2jq4\") pod \"community-operators-c46tj\" (UID: \"ded37bba-e75e-4232-9feb-5591b7fadbb9\") " pod="openshift-marketplace/community-operators-c46tj" Oct 14 13:49:36 crc kubenswrapper[4725]: I1014 13:49:36.890689 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ded37bba-e75e-4232-9feb-5591b7fadbb9-utilities\") pod \"community-operators-c46tj\" (UID: \"ded37bba-e75e-4232-9feb-5591b7fadbb9\") " pod="openshift-marketplace/community-operators-c46tj" Oct 14 13:49:36 crc kubenswrapper[4725]: I1014 13:49:36.890768 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/ded37bba-e75e-4232-9feb-5591b7fadbb9-catalog-content\") pod \"community-operators-c46tj\" (UID: \"ded37bba-e75e-4232-9feb-5591b7fadbb9\") " pod="openshift-marketplace/community-operators-c46tj" Oct 14 13:49:36 crc kubenswrapper[4725]: I1014 13:49:36.897545 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/017a8a26-0804-4e78-972b-e74224e16f72-kube-api-access-drzkl" (OuterVolumeSpecName: "kube-api-access-drzkl") pod "017a8a26-0804-4e78-972b-e74224e16f72" (UID: "017a8a26-0804-4e78-972b-e74224e16f72"). InnerVolumeSpecName "kube-api-access-drzkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:49:36 crc kubenswrapper[4725]: I1014 13:49:36.924728 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/017a8a26-0804-4e78-972b-e74224e16f72-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "017a8a26-0804-4e78-972b-e74224e16f72" (UID: "017a8a26-0804-4e78-972b-e74224e16f72"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:49:36 crc kubenswrapper[4725]: I1014 13:49:36.930137 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/017a8a26-0804-4e78-972b-e74224e16f72-inventory" (OuterVolumeSpecName: "inventory") pod "017a8a26-0804-4e78-972b-e74224e16f72" (UID: "017a8a26-0804-4e78-972b-e74224e16f72"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:49:36 crc kubenswrapper[4725]: I1014 13:49:36.932912 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/017a8a26-0804-4e78-972b-e74224e16f72-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "017a8a26-0804-4e78-972b-e74224e16f72" (UID: "017a8a26-0804-4e78-972b-e74224e16f72"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:49:36 crc kubenswrapper[4725]: I1014 13:49:36.939398 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/017a8a26-0804-4e78-972b-e74224e16f72-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "017a8a26-0804-4e78-972b-e74224e16f72" (UID: "017a8a26-0804-4e78-972b-e74224e16f72"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:49:36 crc kubenswrapper[4725]: I1014 13:49:36.940249 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/017a8a26-0804-4e78-972b-e74224e16f72-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "017a8a26-0804-4e78-972b-e74224e16f72" (UID: "017a8a26-0804-4e78-972b-e74224e16f72"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:49:36 crc kubenswrapper[4725]: I1014 13:49:36.992628 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2jq4\" (UniqueName: \"kubernetes.io/projected/ded37bba-e75e-4232-9feb-5591b7fadbb9-kube-api-access-d2jq4\") pod \"community-operators-c46tj\" (UID: \"ded37bba-e75e-4232-9feb-5591b7fadbb9\") " pod="openshift-marketplace/community-operators-c46tj" Oct 14 13:49:36 crc kubenswrapper[4725]: I1014 13:49:36.992715 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ded37bba-e75e-4232-9feb-5591b7fadbb9-utilities\") pod \"community-operators-c46tj\" (UID: \"ded37bba-e75e-4232-9feb-5591b7fadbb9\") " pod="openshift-marketplace/community-operators-c46tj" Oct 14 13:49:36 crc kubenswrapper[4725]: I1014 13:49:36.992799 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ded37bba-e75e-4232-9feb-5591b7fadbb9-catalog-content\") pod \"community-operators-c46tj\" (UID: \"ded37bba-e75e-4232-9feb-5591b7fadbb9\") " pod="openshift-marketplace/community-operators-c46tj" Oct 14 13:49:36 crc kubenswrapper[4725]: I1014 13:49:36.992894 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drzkl\" (UniqueName: \"kubernetes.io/projected/017a8a26-0804-4e78-972b-e74224e16f72-kube-api-access-drzkl\") on node \"crc\" DevicePath \"\"" Oct 14 13:49:36 crc kubenswrapper[4725]: I1014 13:49:36.992922 4725 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/017a8a26-0804-4e78-972b-e74224e16f72-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Oct 14 13:49:36 crc kubenswrapper[4725]: I1014 13:49:36.992940 4725 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/017a8a26-0804-4e78-972b-e74224e16f72-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 13:49:36 crc kubenswrapper[4725]: I1014 13:49:36.992954 4725 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/017a8a26-0804-4e78-972b-e74224e16f72-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Oct 14 13:49:36 crc kubenswrapper[4725]: I1014 13:49:36.992967 4725 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/017a8a26-0804-4e78-972b-e74224e16f72-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 13:49:36 crc kubenswrapper[4725]: I1014 13:49:36.992979 4725 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/017a8a26-0804-4e78-972b-e74224e16f72-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:49:36 crc kubenswrapper[4725]: I1014 13:49:36.993342 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ded37bba-e75e-4232-9feb-5591b7fadbb9-utilities\") pod \"community-operators-c46tj\" (UID: \"ded37bba-e75e-4232-9feb-5591b7fadbb9\") " pod="openshift-marketplace/community-operators-c46tj" Oct 14 13:49:36 crc kubenswrapper[4725]: I1014 13:49:36.993355 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/ded37bba-e75e-4232-9feb-5591b7fadbb9-catalog-content\") pod \"community-operators-c46tj\" (UID: \"ded37bba-e75e-4232-9feb-5591b7fadbb9\") " pod="openshift-marketplace/community-operators-c46tj" Oct 14 13:49:37 crc kubenswrapper[4725]: I1014 13:49:37.013241 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2jq4\" (UniqueName: \"kubernetes.io/projected/ded37bba-e75e-4232-9feb-5591b7fadbb9-kube-api-access-d2jq4\") pod \"community-operators-c46tj\" (UID: \"ded37bba-e75e-4232-9feb-5591b7fadbb9\") " pod="openshift-marketplace/community-operators-c46tj" Oct 14 13:49:37 crc kubenswrapper[4725]: I1014 13:49:37.052794 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c46tj" Oct 14 13:49:37 crc kubenswrapper[4725]: I1014 13:49:37.402876 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74" event={"ID":"017a8a26-0804-4e78-972b-e74224e16f72","Type":"ContainerDied","Data":"9b015e21d9276338eb2e208b3964f0a75ba7e79fec644b2111ed2fc9783e4a65"} Oct 14 13:49:37 crc kubenswrapper[4725]: I1014 13:49:37.402922 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b015e21d9276338eb2e208b3964f0a75ba7e79fec644b2111ed2fc9783e4a65" Oct 14 13:49:37 crc kubenswrapper[4725]: I1014 13:49:37.402989 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74" Oct 14 13:49:37 crc kubenswrapper[4725]: I1014 13:49:37.485622 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tn5zz"] Oct 14 13:49:37 crc kubenswrapper[4725]: E1014 13:49:37.486264 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="017a8a26-0804-4e78-972b-e74224e16f72" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Oct 14 13:49:37 crc kubenswrapper[4725]: I1014 13:49:37.486290 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="017a8a26-0804-4e78-972b-e74224e16f72" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Oct 14 13:49:37 crc kubenswrapper[4725]: I1014 13:49:37.486540 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="017a8a26-0804-4e78-972b-e74224e16f72" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Oct 14 13:49:37 crc kubenswrapper[4725]: I1014 13:49:37.487542 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tn5zz" Oct 14 13:49:37 crc kubenswrapper[4725]: I1014 13:49:37.493828 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tn5zz"] Oct 14 13:49:37 crc kubenswrapper[4725]: I1014 13:49:37.494331 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qnvbv" Oct 14 13:49:37 crc kubenswrapper[4725]: I1014 13:49:37.494527 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 13:49:37 crc kubenswrapper[4725]: I1014 13:49:37.494574 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 13:49:37 crc kubenswrapper[4725]: I1014 13:49:37.495435 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 13:49:37 crc kubenswrapper[4725]: I1014 13:49:37.495566 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Oct 14 13:49:37 crc kubenswrapper[4725]: I1014 13:49:37.605638 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6846c1d5-d8cd-40d7-a7b9-080f8f722248-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tn5zz\" (UID: \"6846c1d5-d8cd-40d7-a7b9-080f8f722248\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tn5zz" Oct 14 13:49:37 crc kubenswrapper[4725]: I1014 13:49:37.606043 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/6846c1d5-d8cd-40d7-a7b9-080f8f722248-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tn5zz\" (UID: \"6846c1d5-d8cd-40d7-a7b9-080f8f722248\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tn5zz" Oct 14 13:49:37 crc kubenswrapper[4725]: I1014 13:49:37.606167 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx5sm\" (UniqueName: \"kubernetes.io/projected/6846c1d5-d8cd-40d7-a7b9-080f8f722248-kube-api-access-kx5sm\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tn5zz\" (UID: \"6846c1d5-d8cd-40d7-a7b9-080f8f722248\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tn5zz" Oct 14 13:49:37 crc kubenswrapper[4725]: I1014 13:49:37.606517 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6846c1d5-d8cd-40d7-a7b9-080f8f722248-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tn5zz\" (UID: \"6846c1d5-d8cd-40d7-a7b9-080f8f722248\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tn5zz" Oct 14 13:49:37 crc kubenswrapper[4725]: I1014 13:49:37.606660 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6846c1d5-d8cd-40d7-a7b9-080f8f722248-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tn5zz\" (UID: \"6846c1d5-d8cd-40d7-a7b9-080f8f722248\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tn5zz" Oct 14 13:49:37 crc kubenswrapper[4725]: I1014 13:49:37.691164 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/community-operators-c46tj"] Oct 14 13:49:37 crc kubenswrapper[4725]: I1014 13:49:37.708527 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6846c1d5-d8cd-40d7-a7b9-080f8f722248-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tn5zz\" (UID: \"6846c1d5-d8cd-40d7-a7b9-080f8f722248\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tn5zz" Oct 14 13:49:37 crc kubenswrapper[4725]: I1014 13:49:37.708958 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/6846c1d5-d8cd-40d7-a7b9-080f8f722248-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tn5zz\" (UID: \"6846c1d5-d8cd-40d7-a7b9-080f8f722248\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tn5zz" Oct 14 13:49:37 crc kubenswrapper[4725]: I1014 13:49:37.709207 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx5sm\" (UniqueName: \"kubernetes.io/projected/6846c1d5-d8cd-40d7-a7b9-080f8f722248-kube-api-access-kx5sm\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tn5zz\" (UID: \"6846c1d5-d8cd-40d7-a7b9-080f8f722248\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tn5zz" Oct 14 13:49:37 crc kubenswrapper[4725]: I1014 13:49:37.709559 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6846c1d5-d8cd-40d7-a7b9-080f8f722248-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tn5zz\" (UID: \"6846c1d5-d8cd-40d7-a7b9-080f8f722248\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tn5zz" Oct 14 13:49:37 crc kubenswrapper[4725]: I1014 13:49:37.709772 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6846c1d5-d8cd-40d7-a7b9-080f8f722248-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tn5zz\" (UID: \"6846c1d5-d8cd-40d7-a7b9-080f8f722248\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tn5zz" Oct 14 13:49:37 crc kubenswrapper[4725]: I1014 13:49:37.714468 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6846c1d5-d8cd-40d7-a7b9-080f8f722248-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tn5zz\" (UID: \"6846c1d5-d8cd-40d7-a7b9-080f8f722248\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tn5zz" Oct 14 13:49:37 crc kubenswrapper[4725]: I1014 13:49:37.714513 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/6846c1d5-d8cd-40d7-a7b9-080f8f722248-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tn5zz\" (UID: \"6846c1d5-d8cd-40d7-a7b9-080f8f722248\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tn5zz" Oct 14 13:49:37 crc kubenswrapper[4725]: I1014 13:49:37.715116 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6846c1d5-d8cd-40d7-a7b9-080f8f722248-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tn5zz\" (UID: \"6846c1d5-d8cd-40d7-a7b9-080f8f722248\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tn5zz" Oct 14 13:49:37 crc kubenswrapper[4725]: I1014 
13:49:37.715750 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6846c1d5-d8cd-40d7-a7b9-080f8f722248-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tn5zz\" (UID: \"6846c1d5-d8cd-40d7-a7b9-080f8f722248\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tn5zz" Oct 14 13:49:37 crc kubenswrapper[4725]: I1014 13:49:37.732992 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx5sm\" (UniqueName: \"kubernetes.io/projected/6846c1d5-d8cd-40d7-a7b9-080f8f722248-kube-api-access-kx5sm\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tn5zz\" (UID: \"6846c1d5-d8cd-40d7-a7b9-080f8f722248\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tn5zz" Oct 14 13:49:37 crc kubenswrapper[4725]: I1014 13:49:37.817031 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tn5zz" Oct 14 13:49:38 crc kubenswrapper[4725]: I1014 13:49:38.383148 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tn5zz"] Oct 14 13:49:38 crc kubenswrapper[4725]: I1014 13:49:38.414690 4725 generic.go:334] "Generic (PLEG): container finished" podID="ded37bba-e75e-4232-9feb-5591b7fadbb9" containerID="4cc6bfb181e9c6e644ad8f38521460d441105904646451bc77c042249d62a878" exitCode=0 Oct 14 13:49:38 crc kubenswrapper[4725]: I1014 13:49:38.414774 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c46tj" event={"ID":"ded37bba-e75e-4232-9feb-5591b7fadbb9","Type":"ContainerDied","Data":"4cc6bfb181e9c6e644ad8f38521460d441105904646451bc77c042249d62a878"} Oct 14 13:49:38 crc kubenswrapper[4725]: I1014 13:49:38.414807 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c46tj" event={"ID":"ded37bba-e75e-4232-9feb-5591b7fadbb9","Type":"ContainerStarted","Data":"778f008acec3eb29ee7c34f9413720371dc5670280c878baab8d90882b082cd8"} Oct 14 13:49:38 crc kubenswrapper[4725]: I1014 13:49:38.416543 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tn5zz" event={"ID":"6846c1d5-d8cd-40d7-a7b9-080f8f722248","Type":"ContainerStarted","Data":"1edce905d02f232f29ae83b5ec2597dc6702e0929bb1f61fed736902d48020c8"} Oct 14 13:49:39 crc kubenswrapper[4725]: I1014 13:49:39.425119 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tn5zz" event={"ID":"6846c1d5-d8cd-40d7-a7b9-080f8f722248","Type":"ContainerStarted","Data":"fe2ca4fb411cd25a1b2b6e1c1e843668cd509e7431cfeb311196286a58e7a2c0"} Oct 14 13:49:39 crc kubenswrapper[4725]: I1014 13:49:39.447718 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tn5zz" podStartSLOduration=2.043292391 podStartE2EDuration="2.447702805s" podCreationTimestamp="2025-10-14 13:49:37 +0000 UTC" firstStartedPulling="2025-10-14 13:49:38.392061192 +0000 UTC m=+2095.240496011" lastFinishedPulling="2025-10-14 13:49:38.796471616 +0000 UTC m=+2095.644906425" observedRunningTime="2025-10-14 13:49:39.442667479 +0000 UTC m=+2096.291102288" watchObservedRunningTime="2025-10-14 13:49:39.447702805 +0000 UTC m=+2096.296137614" Oct 14 13:49:40 crc kubenswrapper[4725]: I1014 13:49:40.437264 4725 generic.go:334] "Generic (PLEG): container finished" 
podID="ded37bba-e75e-4232-9feb-5591b7fadbb9" containerID="e2ede1dd207681f50806a9b850132ea2c982273391909ad95e6c15edd9590903" exitCode=0 Oct 14 13:49:40 crc kubenswrapper[4725]: I1014 13:49:40.437312 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c46tj" event={"ID":"ded37bba-e75e-4232-9feb-5591b7fadbb9","Type":"ContainerDied","Data":"e2ede1dd207681f50806a9b850132ea2c982273391909ad95e6c15edd9590903"} Oct 14 13:49:41 crc kubenswrapper[4725]: I1014 13:49:41.447929 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c46tj" event={"ID":"ded37bba-e75e-4232-9feb-5591b7fadbb9","Type":"ContainerStarted","Data":"5f5a4656e5eb2eab64f15b507ef967820c853e30fcb0ef12c07cfc5a5a1f68b1"} Oct 14 13:49:41 crc kubenswrapper[4725]: I1014 13:49:41.477497 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c46tj" podStartSLOduration=2.9994267580000002 podStartE2EDuration="5.47747507s" podCreationTimestamp="2025-10-14 13:49:36 +0000 UTC" firstStartedPulling="2025-10-14 13:49:38.416648516 +0000 UTC m=+2095.265083325" lastFinishedPulling="2025-10-14 13:49:40.894696828 +0000 UTC m=+2097.743131637" observedRunningTime="2025-10-14 13:49:41.470122313 +0000 UTC m=+2098.318557132" watchObservedRunningTime="2025-10-14 13:49:41.47747507 +0000 UTC m=+2098.325909879" Oct 14 13:49:47 crc kubenswrapper[4725]: I1014 13:49:47.053334 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c46tj" Oct 14 13:49:47 crc kubenswrapper[4725]: I1014 13:49:47.053788 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c46tj" Oct 14 13:49:47 crc kubenswrapper[4725]: I1014 13:49:47.124995 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c46tj" Oct 14 13:49:47 crc kubenswrapper[4725]: I1014 13:49:47.596406 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c46tj" Oct 14 13:49:47 crc kubenswrapper[4725]: I1014 13:49:47.655920 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c46tj"] Oct 14 13:49:49 crc kubenswrapper[4725]: I1014 13:49:49.529533 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c46tj" podUID="ded37bba-e75e-4232-9feb-5591b7fadbb9" containerName="registry-server" containerID="cri-o://5f5a4656e5eb2eab64f15b507ef967820c853e30fcb0ef12c07cfc5a5a1f68b1" gracePeriod=2 Oct 14 13:49:49 crc kubenswrapper[4725]: I1014 13:49:49.981681 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c46tj" Oct 14 13:49:50 crc kubenswrapper[4725]: I1014 13:49:50.090170 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ded37bba-e75e-4232-9feb-5591b7fadbb9-catalog-content\") pod \"ded37bba-e75e-4232-9feb-5591b7fadbb9\" (UID: \"ded37bba-e75e-4232-9feb-5591b7fadbb9\") " Oct 14 13:49:50 crc kubenswrapper[4725]: I1014 13:49:50.090584 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2jq4\" (UniqueName: \"kubernetes.io/projected/ded37bba-e75e-4232-9feb-5591b7fadbb9-kube-api-access-d2jq4\") pod \"ded37bba-e75e-4232-9feb-5591b7fadbb9\" (UID: \"ded37bba-e75e-4232-9feb-5591b7fadbb9\") " Oct 14 13:49:50 crc kubenswrapper[4725]: I1014 13:49:50.090622 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ded37bba-e75e-4232-9feb-5591b7fadbb9-utilities\") pod \"ded37bba-e75e-4232-9feb-5591b7fadbb9\" (UID: \"ded37bba-e75e-4232-9feb-5591b7fadbb9\") " Oct 14 13:49:50 crc kubenswrapper[4725]: I1014 13:49:50.091301 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ded37bba-e75e-4232-9feb-5591b7fadbb9-utilities" (OuterVolumeSpecName: "utilities") pod "ded37bba-e75e-4232-9feb-5591b7fadbb9" (UID: "ded37bba-e75e-4232-9feb-5591b7fadbb9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:49:50 crc kubenswrapper[4725]: I1014 13:49:50.096889 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ded37bba-e75e-4232-9feb-5591b7fadbb9-kube-api-access-d2jq4" (OuterVolumeSpecName: "kube-api-access-d2jq4") pod "ded37bba-e75e-4232-9feb-5591b7fadbb9" (UID: "ded37bba-e75e-4232-9feb-5591b7fadbb9"). InnerVolumeSpecName "kube-api-access-d2jq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:49:50 crc kubenswrapper[4725]: I1014 13:49:50.140414 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ded37bba-e75e-4232-9feb-5591b7fadbb9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ded37bba-e75e-4232-9feb-5591b7fadbb9" (UID: "ded37bba-e75e-4232-9feb-5591b7fadbb9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:49:50 crc kubenswrapper[4725]: I1014 13:49:50.194665 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2jq4\" (UniqueName: \"kubernetes.io/projected/ded37bba-e75e-4232-9feb-5591b7fadbb9-kube-api-access-d2jq4\") on node \"crc\" DevicePath \"\"" Oct 14 13:49:50 crc kubenswrapper[4725]: I1014 13:49:50.194788 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ded37bba-e75e-4232-9feb-5591b7fadbb9-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 13:49:50 crc kubenswrapper[4725]: I1014 13:49:50.195049 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ded37bba-e75e-4232-9feb-5591b7fadbb9-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 13:49:50 crc kubenswrapper[4725]: I1014 13:49:50.540195 4725 generic.go:334] "Generic (PLEG): container finished" podID="ded37bba-e75e-4232-9feb-5591b7fadbb9" containerID="5f5a4656e5eb2eab64f15b507ef967820c853e30fcb0ef12c07cfc5a5a1f68b1" exitCode=0 Oct 14 13:49:50 crc kubenswrapper[4725]: I1014 13:49:50.540238 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c46tj" event={"ID":"ded37bba-e75e-4232-9feb-5591b7fadbb9","Type":"ContainerDied","Data":"5f5a4656e5eb2eab64f15b507ef967820c853e30fcb0ef12c07cfc5a5a1f68b1"} Oct 14 13:49:50 crc kubenswrapper[4725]: I1014 13:49:50.540262 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c46tj" event={"ID":"ded37bba-e75e-4232-9feb-5591b7fadbb9","Type":"ContainerDied","Data":"778f008acec3eb29ee7c34f9413720371dc5670280c878baab8d90882b082cd8"} Oct 14 13:49:50 crc kubenswrapper[4725]: I1014 13:49:50.540280 4725 scope.go:117] "RemoveContainer" containerID="5f5a4656e5eb2eab64f15b507ef967820c853e30fcb0ef12c07cfc5a5a1f68b1" Oct 14 13:49:50 crc kubenswrapper[4725]: I1014 13:49:50.540396 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c46tj" Oct 14 13:49:50 crc kubenswrapper[4725]: I1014 13:49:50.580290 4725 scope.go:117] "RemoveContainer" containerID="e2ede1dd207681f50806a9b850132ea2c982273391909ad95e6c15edd9590903" Oct 14 13:49:50 crc kubenswrapper[4725]: I1014 13:49:50.592599 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c46tj"] Oct 14 13:49:50 crc kubenswrapper[4725]: I1014 13:49:50.600803 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c46tj"] Oct 14 13:49:50 crc kubenswrapper[4725]: I1014 13:49:50.613110 4725 scope.go:117] "RemoveContainer" containerID="4cc6bfb181e9c6e644ad8f38521460d441105904646451bc77c042249d62a878" Oct 14 13:49:50 crc kubenswrapper[4725]: I1014 13:49:50.643928 4725 scope.go:117] "RemoveContainer" containerID="5f5a4656e5eb2eab64f15b507ef967820c853e30fcb0ef12c07cfc5a5a1f68b1" Oct 14 13:49:50 crc kubenswrapper[4725]: E1014 13:49:50.644497 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f5a4656e5eb2eab64f15b507ef967820c853e30fcb0ef12c07cfc5a5a1f68b1\": container with ID starting with 5f5a4656e5eb2eab64f15b507ef967820c853e30fcb0ef12c07cfc5a5a1f68b1 not found: ID does not exist" containerID="5f5a4656e5eb2eab64f15b507ef967820c853e30fcb0ef12c07cfc5a5a1f68b1" Oct 14 13:49:50 crc kubenswrapper[4725]: I1014 13:49:50.644548 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f5a4656e5eb2eab64f15b507ef967820c853e30fcb0ef12c07cfc5a5a1f68b1"} err="failed to get container status \"5f5a4656e5eb2eab64f15b507ef967820c853e30fcb0ef12c07cfc5a5a1f68b1\": rpc error: code = NotFound desc = could not find container \"5f5a4656e5eb2eab64f15b507ef967820c853e30fcb0ef12c07cfc5a5a1f68b1\": container with ID starting with 5f5a4656e5eb2eab64f15b507ef967820c853e30fcb0ef12c07cfc5a5a1f68b1 not found: ID does not exist" Oct 14 13:49:50 crc kubenswrapper[4725]: I1014 13:49:50.644578 4725 scope.go:117] "RemoveContainer" containerID="e2ede1dd207681f50806a9b850132ea2c982273391909ad95e6c15edd9590903" Oct 14 13:49:50 crc kubenswrapper[4725]: E1014 13:49:50.644986 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2ede1dd207681f50806a9b850132ea2c982273391909ad95e6c15edd9590903\": container with ID starting with e2ede1dd207681f50806a9b850132ea2c982273391909ad95e6c15edd9590903 not found: ID does not exist" containerID="e2ede1dd207681f50806a9b850132ea2c982273391909ad95e6c15edd9590903" Oct 14 13:49:50 crc kubenswrapper[4725]: I1014 13:49:50.645020 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2ede1dd207681f50806a9b850132ea2c982273391909ad95e6c15edd9590903"} err="failed to get container status \"e2ede1dd207681f50806a9b850132ea2c982273391909ad95e6c15edd9590903\": rpc error: code = NotFound desc = could not find container \"e2ede1dd207681f50806a9b850132ea2c982273391909ad95e6c15edd9590903\": container with ID starting with e2ede1dd207681f50806a9b850132ea2c982273391909ad95e6c15edd9590903 not found: ID does not exist" Oct 14 13:49:50 crc kubenswrapper[4725]: I1014 13:49:50.645043 4725 scope.go:117] "RemoveContainer" containerID="4cc6bfb181e9c6e644ad8f38521460d441105904646451bc77c042249d62a878" Oct 14 13:49:50 crc kubenswrapper[4725]: E1014 13:49:50.645291 4725 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4cc6bfb181e9c6e644ad8f38521460d441105904646451bc77c042249d62a878\": container with ID starting with 4cc6bfb181e9c6e644ad8f38521460d441105904646451bc77c042249d62a878 not found: ID does not exist" containerID="4cc6bfb181e9c6e644ad8f38521460d441105904646451bc77c042249d62a878" Oct 14 13:49:50 crc kubenswrapper[4725]: I1014 13:49:50.645309 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cc6bfb181e9c6e644ad8f38521460d441105904646451bc77c042249d62a878"} err="failed to get container status \"4cc6bfb181e9c6e644ad8f38521460d441105904646451bc77c042249d62a878\": rpc error: code = NotFound desc = could not find container \"4cc6bfb181e9c6e644ad8f38521460d441105904646451bc77c042249d62a878\": container with ID starting with 4cc6bfb181e9c6e644ad8f38521460d441105904646451bc77c042249d62a878 not found: ID does not exist" Oct 14 13:49:51 crc kubenswrapper[4725]: I1014 13:49:51.935395 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ded37bba-e75e-4232-9feb-5591b7fadbb9" path="/var/lib/kubelet/pods/ded37bba-e75e-4232-9feb-5591b7fadbb9/volumes" Oct 14 13:51:32 crc kubenswrapper[4725]: I1014 13:51:32.520556 4725 patch_prober.go:28] interesting pod/machine-config-daemon-t9hh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 13:51:32 crc kubenswrapper[4725]: I1014 13:51:32.521109 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 13:52:02 crc kubenswrapper[4725]: I1014 13:52:02.520664 4725 patch_prober.go:28] interesting pod/machine-config-daemon-t9hh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 13:52:02 crc kubenswrapper[4725]: I1014 13:52:02.521169 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 13:52:32 crc kubenswrapper[4725]: I1014 13:52:32.520370 4725 patch_prober.go:28] interesting pod/machine-config-daemon-t9hh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 13:52:32 crc kubenswrapper[4725]: I1014 13:52:32.521139 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 13:52:32 crc kubenswrapper[4725]: I1014 13:52:32.521185 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" Oct 14 13:52:32 crc kubenswrapper[4725]: I1014 13:52:32.522025 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"970512403f3a724310c7fbfce185ff25273c496de6bf6e9d555198ed6e64e7dd"} pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 13:52:32 crc kubenswrapper[4725]: I1014 13:52:32.522085 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" containerID="cri-o://970512403f3a724310c7fbfce185ff25273c496de6bf6e9d555198ed6e64e7dd" gracePeriod=600 Oct 14 13:52:32 crc kubenswrapper[4725]: E1014 13:52:32.666998 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 13:52:33 crc kubenswrapper[4725]: I1014 13:52:33.249752 4725 generic.go:334] "Generic (PLEG): container finished" podID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerID="970512403f3a724310c7fbfce185ff25273c496de6bf6e9d555198ed6e64e7dd" exitCode=0 Oct 14 13:52:33 crc kubenswrapper[4725]: I1014 13:52:33.249803 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" event={"ID":"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c","Type":"ContainerDied","Data":"970512403f3a724310c7fbfce185ff25273c496de6bf6e9d555198ed6e64e7dd"} Oct 14 13:52:33 crc kubenswrapper[4725]: I1014 13:52:33.249871 4725 scope.go:117] "RemoveContainer" containerID="b9f10e2afe51a3b5ed3de279d9f89815b29b05eb672bbd7405939a8324f43233" Oct 14 13:52:33 crc kubenswrapper[4725]: I1014 13:52:33.254538 4725 scope.go:117] "RemoveContainer" containerID="970512403f3a724310c7fbfce185ff25273c496de6bf6e9d555198ed6e64e7dd" Oct 14 13:52:33 crc kubenswrapper[4725]: E1014 13:52:33.255416 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 13:52:46 crc kubenswrapper[4725]: I1014 13:52:46.920882 4725 scope.go:117] "RemoveContainer" containerID="970512403f3a724310c7fbfce185ff25273c496de6bf6e9d555198ed6e64e7dd" Oct 14 13:52:46 crc kubenswrapper[4725]: E1014 13:52:46.921689 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 13:52:59 crc 
kubenswrapper[4725]: I1014 13:52:59.920925 4725 scope.go:117] "RemoveContainer" containerID="970512403f3a724310c7fbfce185ff25273c496de6bf6e9d555198ed6e64e7dd" Oct 14 13:52:59 crc kubenswrapper[4725]: E1014 13:52:59.921744 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 13:53:13 crc kubenswrapper[4725]: I1014 13:53:13.929527 4725 scope.go:117] "RemoveContainer" containerID="970512403f3a724310c7fbfce185ff25273c496de6bf6e9d555198ed6e64e7dd" Oct 14 13:53:13 crc kubenswrapper[4725]: E1014 13:53:13.930773 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 13:53:28 crc kubenswrapper[4725]: I1014 13:53:28.921938 4725 scope.go:117] "RemoveContainer" containerID="970512403f3a724310c7fbfce185ff25273c496de6bf6e9d555198ed6e64e7dd" Oct 14 13:53:28 crc kubenswrapper[4725]: E1014 13:53:28.923104 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 13:53:38 crc kubenswrapper[4725]: I1014 13:53:38.951261 4725 generic.go:334] "Generic (PLEG): container finished" podID="6846c1d5-d8cd-40d7-a7b9-080f8f722248" containerID="fe2ca4fb411cd25a1b2b6e1c1e843668cd509e7431cfeb311196286a58e7a2c0" exitCode=0 Oct 14 13:53:38 crc kubenswrapper[4725]: I1014 13:53:38.951403 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tn5zz" event={"ID":"6846c1d5-d8cd-40d7-a7b9-080f8f722248","Type":"ContainerDied","Data":"fe2ca4fb411cd25a1b2b6e1c1e843668cd509e7431cfeb311196286a58e7a2c0"} Oct 14 13:53:40 crc kubenswrapper[4725]: I1014 13:53:40.419521 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tn5zz" Oct 14 13:53:40 crc kubenswrapper[4725]: I1014 13:53:40.594847 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kx5sm\" (UniqueName: \"kubernetes.io/projected/6846c1d5-d8cd-40d7-a7b9-080f8f722248-kube-api-access-kx5sm\") pod \"6846c1d5-d8cd-40d7-a7b9-080f8f722248\" (UID: \"6846c1d5-d8cd-40d7-a7b9-080f8f722248\") " Oct 14 13:53:40 crc kubenswrapper[4725]: I1014 13:53:40.595226 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/6846c1d5-d8cd-40d7-a7b9-080f8f722248-libvirt-secret-0\") pod \"6846c1d5-d8cd-40d7-a7b9-080f8f722248\" (UID: \"6846c1d5-d8cd-40d7-a7b9-080f8f722248\") " Oct 14 13:53:40 crc kubenswrapper[4725]: I1014 13:53:40.595348 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6846c1d5-d8cd-40d7-a7b9-080f8f722248-inventory\") pod \"6846c1d5-d8cd-40d7-a7b9-080f8f722248\" (UID: \"6846c1d5-d8cd-40d7-a7b9-080f8f722248\") " Oct 14 13:53:40 crc kubenswrapper[4725]: I1014 13:53:40.595464 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6846c1d5-d8cd-40d7-a7b9-080f8f722248-libvirt-combined-ca-bundle\") pod \"6846c1d5-d8cd-40d7-a7b9-080f8f722248\" (UID: \"6846c1d5-d8cd-40d7-a7b9-080f8f722248\") " Oct 14 13:53:40 crc kubenswrapper[4725]: I1014 13:53:40.595518 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6846c1d5-d8cd-40d7-a7b9-080f8f722248-ssh-key\") pod \"6846c1d5-d8cd-40d7-a7b9-080f8f722248\" (UID: \"6846c1d5-d8cd-40d7-a7b9-080f8f722248\") " Oct 14 13:53:40 crc kubenswrapper[4725]: I1014 13:53:40.601500 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6846c1d5-d8cd-40d7-a7b9-080f8f722248-kube-api-access-kx5sm" (OuterVolumeSpecName: "kube-api-access-kx5sm") pod "6846c1d5-d8cd-40d7-a7b9-080f8f722248" (UID: "6846c1d5-d8cd-40d7-a7b9-080f8f722248"). InnerVolumeSpecName "kube-api-access-kx5sm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:53:40 crc kubenswrapper[4725]: I1014 13:53:40.603126 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6846c1d5-d8cd-40d7-a7b9-080f8f722248-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "6846c1d5-d8cd-40d7-a7b9-080f8f722248" (UID: "6846c1d5-d8cd-40d7-a7b9-080f8f722248"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:53:40 crc kubenswrapper[4725]: I1014 13:53:40.625610 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6846c1d5-d8cd-40d7-a7b9-080f8f722248-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6846c1d5-d8cd-40d7-a7b9-080f8f722248" (UID: "6846c1d5-d8cd-40d7-a7b9-080f8f722248"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:53:40 crc kubenswrapper[4725]: I1014 13:53:40.628047 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6846c1d5-d8cd-40d7-a7b9-080f8f722248-inventory" (OuterVolumeSpecName: "inventory") pod "6846c1d5-d8cd-40d7-a7b9-080f8f722248" (UID: "6846c1d5-d8cd-40d7-a7b9-080f8f722248"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:53:40 crc kubenswrapper[4725]: I1014 13:53:40.629008 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6846c1d5-d8cd-40d7-a7b9-080f8f722248-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "6846c1d5-d8cd-40d7-a7b9-080f8f722248" (UID: "6846c1d5-d8cd-40d7-a7b9-080f8f722248"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:53:40 crc kubenswrapper[4725]: I1014 13:53:40.697264 4725 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6846c1d5-d8cd-40d7-a7b9-080f8f722248-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 13:53:40 crc kubenswrapper[4725]: I1014 13:53:40.697303 4725 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6846c1d5-d8cd-40d7-a7b9-080f8f722248-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:53:40 crc kubenswrapper[4725]: I1014 13:53:40.697317 4725 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6846c1d5-d8cd-40d7-a7b9-080f8f722248-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 13:53:40 crc kubenswrapper[4725]: I1014 13:53:40.697329 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kx5sm\" (UniqueName: \"kubernetes.io/projected/6846c1d5-d8cd-40d7-a7b9-080f8f722248-kube-api-access-kx5sm\") on node \"crc\" DevicePath \"\"" Oct 14 13:53:40 crc kubenswrapper[4725]: I1014 13:53:40.697340 4725 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/6846c1d5-d8cd-40d7-a7b9-080f8f722248-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Oct 14 13:53:40 crc kubenswrapper[4725]: I1014 13:53:40.921950 4725 scope.go:117] "RemoveContainer" containerID="970512403f3a724310c7fbfce185ff25273c496de6bf6e9d555198ed6e64e7dd" Oct 14 13:53:40 crc kubenswrapper[4725]: E1014 13:53:40.922702 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 13:53:40 crc kubenswrapper[4725]: I1014 13:53:40.977902 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tn5zz" event={"ID":"6846c1d5-d8cd-40d7-a7b9-080f8f722248","Type":"ContainerDied","Data":"1edce905d02f232f29ae83b5ec2597dc6702e0929bb1f61fed736902d48020c8"} Oct 14 13:53:40 crc kubenswrapper[4725]: I1014 13:53:40.978224 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1edce905d02f232f29ae83b5ec2597dc6702e0929bb1f61fed736902d48020c8" Oct 14 13:53:40 crc kubenswrapper[4725]: I1014 13:53:40.978183 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tn5zz" Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.057336 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-6dd7s"] Oct 14 13:53:41 crc kubenswrapper[4725]: E1014 13:53:41.057822 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ded37bba-e75e-4232-9feb-5591b7fadbb9" containerName="extract-content" Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.057844 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="ded37bba-e75e-4232-9feb-5591b7fadbb9" containerName="extract-content" Oct 14 13:53:41 crc kubenswrapper[4725]: E1014 13:53:41.057859 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6846c1d5-d8cd-40d7-a7b9-080f8f722248" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.057868 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="6846c1d5-d8cd-40d7-a7b9-080f8f722248" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Oct 14 13:53:41 crc kubenswrapper[4725]: E1014 13:53:41.057889 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ded37bba-e75e-4232-9feb-5591b7fadbb9" containerName="extract-utilities" Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.057897 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="ded37bba-e75e-4232-9feb-5591b7fadbb9" containerName="extract-utilities" Oct 14 13:53:41 crc kubenswrapper[4725]: E1014 13:53:41.057915 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ded37bba-e75e-4232-9feb-5591b7fadbb9" containerName="registry-server" Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.057922 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="ded37bba-e75e-4232-9feb-5591b7fadbb9" containerName="registry-server" Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.058112 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="6846c1d5-d8cd-40d7-a7b9-080f8f722248" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.058143 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="ded37bba-e75e-4232-9feb-5591b7fadbb9" containerName="registry-server" Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.058877 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6dd7s" Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.061640 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qnvbv" Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.061824 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.061871 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.061902 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.061837 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.062042 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.062603 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.068088 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-6dd7s"] Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.206771 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/23eebac6-ae71-4a11-bad6-e58bdcb8d716-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6dd7s\" (UID: \"23eebac6-ae71-4a11-bad6-e58bdcb8d716\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6dd7s" Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.206844 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23eebac6-ae71-4a11-bad6-e58bdcb8d716-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6dd7s\" (UID: \"23eebac6-ae71-4a11-bad6-e58bdcb8d716\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6dd7s" Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.206916 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/23eebac6-ae71-4a11-bad6-e58bdcb8d716-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6dd7s\" (UID: \"23eebac6-ae71-4a11-bad6-e58bdcb8d716\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6dd7s" Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.206943 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/23eebac6-ae71-4a11-bad6-e58bdcb8d716-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6dd7s\" (UID: \"23eebac6-ae71-4a11-bad6-e58bdcb8d716\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6dd7s" Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.207023 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/23eebac6-ae71-4a11-bad6-e58bdcb8d716-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6dd7s\" (UID: \"23eebac6-ae71-4a11-bad6-e58bdcb8d716\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6dd7s" Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.207062 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlng5\" (UniqueName: \"kubernetes.io/projected/23eebac6-ae71-4a11-bad6-e58bdcb8d716-kube-api-access-xlng5\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6dd7s\" (UID: \"23eebac6-ae71-4a11-bad6-e58bdcb8d716\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6dd7s" Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.207091 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/23eebac6-ae71-4a11-bad6-e58bdcb8d716-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6dd7s\" (UID: \"23eebac6-ae71-4a11-bad6-e58bdcb8d716\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6dd7s" Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.207341 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/23eebac6-ae71-4a11-bad6-e58bdcb8d716-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6dd7s\" (UID: \"23eebac6-ae71-4a11-bad6-e58bdcb8d716\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6dd7s" Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.207484 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23eebac6-ae71-4a11-bad6-e58bdcb8d716-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6dd7s\" (UID: \"23eebac6-ae71-4a11-bad6-e58bdcb8d716\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6dd7s" Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.309207 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/23eebac6-ae71-4a11-bad6-e58bdcb8d716-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6dd7s\" (UID: \"23eebac6-ae71-4a11-bad6-e58bdcb8d716\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6dd7s" Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.309248 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/23eebac6-ae71-4a11-bad6-e58bdcb8d716-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6dd7s\" (UID: \"23eebac6-ae71-4a11-bad6-e58bdcb8d716\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6dd7s" Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.309318 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/23eebac6-ae71-4a11-bad6-e58bdcb8d716-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6dd7s\" (UID: \"23eebac6-ae71-4a11-bad6-e58bdcb8d716\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6dd7s" Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.309351 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlng5\" (UniqueName: 
\"kubernetes.io/projected/23eebac6-ae71-4a11-bad6-e58bdcb8d716-kube-api-access-xlng5\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6dd7s\" (UID: \"23eebac6-ae71-4a11-bad6-e58bdcb8d716\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6dd7s" Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.309374 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/23eebac6-ae71-4a11-bad6-e58bdcb8d716-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6dd7s\" (UID: \"23eebac6-ae71-4a11-bad6-e58bdcb8d716\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6dd7s" Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.309434 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/23eebac6-ae71-4a11-bad6-e58bdcb8d716-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6dd7s\" (UID: \"23eebac6-ae71-4a11-bad6-e58bdcb8d716\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6dd7s" Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.309487 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23eebac6-ae71-4a11-bad6-e58bdcb8d716-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6dd7s\" (UID: \"23eebac6-ae71-4a11-bad6-e58bdcb8d716\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6dd7s" Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.309512 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/23eebac6-ae71-4a11-bad6-e58bdcb8d716-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6dd7s\" (UID: \"23eebac6-ae71-4a11-bad6-e58bdcb8d716\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6dd7s" Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.309547 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23eebac6-ae71-4a11-bad6-e58bdcb8d716-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6dd7s\" (UID: \"23eebac6-ae71-4a11-bad6-e58bdcb8d716\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6dd7s" Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.310726 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/23eebac6-ae71-4a11-bad6-e58bdcb8d716-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6dd7s\" (UID: \"23eebac6-ae71-4a11-bad6-e58bdcb8d716\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6dd7s" Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.312834 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/23eebac6-ae71-4a11-bad6-e58bdcb8d716-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6dd7s\" (UID: \"23eebac6-ae71-4a11-bad6-e58bdcb8d716\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6dd7s" Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.312936 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/23eebac6-ae71-4a11-bad6-e58bdcb8d716-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6dd7s\" (UID: 
\"23eebac6-ae71-4a11-bad6-e58bdcb8d716\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6dd7s" Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.313405 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/23eebac6-ae71-4a11-bad6-e58bdcb8d716-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6dd7s\" (UID: \"23eebac6-ae71-4a11-bad6-e58bdcb8d716\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6dd7s" Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.313902 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23eebac6-ae71-4a11-bad6-e58bdcb8d716-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6dd7s\" (UID: \"23eebac6-ae71-4a11-bad6-e58bdcb8d716\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6dd7s" Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.314752 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/23eebac6-ae71-4a11-bad6-e58bdcb8d716-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6dd7s\" (UID: \"23eebac6-ae71-4a11-bad6-e58bdcb8d716\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6dd7s" Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.315111 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23eebac6-ae71-4a11-bad6-e58bdcb8d716-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6dd7s\" (UID: \"23eebac6-ae71-4a11-bad6-e58bdcb8d716\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6dd7s" Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.315490 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/23eebac6-ae71-4a11-bad6-e58bdcb8d716-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6dd7s\" (UID: \"23eebac6-ae71-4a11-bad6-e58bdcb8d716\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6dd7s" Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.334066 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlng5\" (UniqueName: \"kubernetes.io/projected/23eebac6-ae71-4a11-bad6-e58bdcb8d716-kube-api-access-xlng5\") pod \"nova-edpm-deployment-openstack-edpm-ipam-6dd7s\" (UID: \"23eebac6-ae71-4a11-bad6-e58bdcb8d716\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6dd7s" Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.381873 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6dd7s" Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.908052 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-6dd7s"] Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.909408 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 13:53:41 crc kubenswrapper[4725]: I1014 13:53:41.987474 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6dd7s" event={"ID":"23eebac6-ae71-4a11-bad6-e58bdcb8d716","Type":"ContainerStarted","Data":"5d7564e5625838e5056fc2c60efa1f0f83e500c2636d74e3d851b4845f570042"} Oct 14 13:53:44 crc kubenswrapper[4725]: I1014 13:53:44.008913 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6dd7s" event={"ID":"23eebac6-ae71-4a11-bad6-e58bdcb8d716","Type":"ContainerStarted","Data":"a9f2908ef80949a3f210663170a94ec7f009b8b4059bc965847495766ad790d0"} Oct 14 13:53:44 crc kubenswrapper[4725]: I1014 13:53:44.025185 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6dd7s" podStartSLOduration=2.211046995 podStartE2EDuration="3.025164482s" podCreationTimestamp="2025-10-14 13:53:41 +0000 UTC" firstStartedPulling="2025-10-14 13:53:41.909048491 +0000 UTC m=+2338.757483340" lastFinishedPulling="2025-10-14 13:53:42.723166018 +0000 UTC m=+2339.571600827" observedRunningTime="2025-10-14 13:53:44.024569375 +0000 UTC m=+2340.873004194" watchObservedRunningTime="2025-10-14 13:53:44.025164482 +0000 UTC m=+2340.873599291" Oct 14 13:53:54 crc kubenswrapper[4725]: I1014 13:53:54.921880 4725 scope.go:117] "RemoveContainer" containerID="970512403f3a724310c7fbfce185ff25273c496de6bf6e9d555198ed6e64e7dd" Oct 14 13:53:54 crc kubenswrapper[4725]: E1014 13:53:54.923849 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 13:54:08 crc kubenswrapper[4725]: I1014 13:54:08.922036 4725 scope.go:117] "RemoveContainer" containerID="970512403f3a724310c7fbfce185ff25273c496de6bf6e9d555198ed6e64e7dd" Oct 14 13:54:08 crc kubenswrapper[4725]: E1014 13:54:08.923014 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 13:54:23 crc kubenswrapper[4725]: I1014 13:54:23.930150 4725 scope.go:117] "RemoveContainer" containerID="970512403f3a724310c7fbfce185ff25273c496de6bf6e9d555198ed6e64e7dd" Oct 14 13:54:23 crc kubenswrapper[4725]: E1014 13:54:23.931223 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 13:54:38 crc kubenswrapper[4725]: I1014 13:54:38.920787 4725 scope.go:117] "RemoveContainer" containerID="970512403f3a724310c7fbfce185ff25273c496de6bf6e9d555198ed6e64e7dd" Oct 14 13:54:38 crc kubenswrapper[4725]: E1014 13:54:38.921744 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 13:54:50 crc kubenswrapper[4725]: I1014 13:54:50.921927 4725 scope.go:117] "RemoveContainer" containerID="970512403f3a724310c7fbfce185ff25273c496de6bf6e9d555198ed6e64e7dd" Oct 14 13:54:50 crc kubenswrapper[4725]: E1014 13:54:50.923050 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 13:55:01 crc kubenswrapper[4725]: I1014 13:55:01.924168 4725 scope.go:117] "RemoveContainer" containerID="970512403f3a724310c7fbfce185ff25273c496de6bf6e9d555198ed6e64e7dd" Oct 14 13:55:01 crc kubenswrapper[4725]: E1014 13:55:01.924747 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 13:55:16 crc kubenswrapper[4725]: I1014 13:55:16.921751 4725 scope.go:117] "RemoveContainer" containerID="970512403f3a724310c7fbfce185ff25273c496de6bf6e9d555198ed6e64e7dd" Oct 14 13:55:16 crc kubenswrapper[4725]: E1014 13:55:16.922701 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 13:55:31 crc kubenswrapper[4725]: I1014 13:55:31.921300 4725 scope.go:117] "RemoveContainer" containerID="970512403f3a724310c7fbfce185ff25273c496de6bf6e9d555198ed6e64e7dd" Oct 14 13:55:31 crc kubenswrapper[4725]: E1014 13:55:31.922147 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" 
podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 13:55:42 crc kubenswrapper[4725]: I1014 13:55:42.921576 4725 scope.go:117] "RemoveContainer" containerID="970512403f3a724310c7fbfce185ff25273c496de6bf6e9d555198ed6e64e7dd" Oct 14 13:55:42 crc kubenswrapper[4725]: E1014 13:55:42.922438 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 13:55:56 crc kubenswrapper[4725]: I1014 13:55:56.922409 4725 scope.go:117] "RemoveContainer" containerID="970512403f3a724310c7fbfce185ff25273c496de6bf6e9d555198ed6e64e7dd" Oct 14 13:55:56 crc kubenswrapper[4725]: E1014 13:55:56.923341 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 13:56:07 crc kubenswrapper[4725]: I1014 13:56:07.921270 4725 scope.go:117] "RemoveContainer" containerID="970512403f3a724310c7fbfce185ff25273c496de6bf6e9d555198ed6e64e7dd" Oct 14 13:56:07 crc kubenswrapper[4725]: E1014 13:56:07.922319 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 13:56:21 crc kubenswrapper[4725]: I1014 13:56:21.922445 4725 scope.go:117] "RemoveContainer" containerID="970512403f3a724310c7fbfce185ff25273c496de6bf6e9d555198ed6e64e7dd" Oct 14 13:56:21 crc kubenswrapper[4725]: E1014 13:56:21.923213 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 13:56:32 crc kubenswrapper[4725]: I1014 13:56:32.921404 4725 scope.go:117] "RemoveContainer" containerID="970512403f3a724310c7fbfce185ff25273c496de6bf6e9d555198ed6e64e7dd" Oct 14 13:56:32 crc kubenswrapper[4725]: E1014 13:56:32.922158 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 13:56:46 crc kubenswrapper[4725]: I1014 13:56:46.922062 4725 scope.go:117] "RemoveContainer" 
containerID="970512403f3a724310c7fbfce185ff25273c496de6bf6e9d555198ed6e64e7dd" Oct 14 13:56:46 crc kubenswrapper[4725]: E1014 13:56:46.923859 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 13:56:57 crc kubenswrapper[4725]: I1014 13:56:57.919194 4725 generic.go:334] "Generic (PLEG): container finished" podID="23eebac6-ae71-4a11-bad6-e58bdcb8d716" containerID="a9f2908ef80949a3f210663170a94ec7f009b8b4059bc965847495766ad790d0" exitCode=0 Oct 14 13:56:57 crc kubenswrapper[4725]: I1014 13:56:57.919300 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6dd7s" event={"ID":"23eebac6-ae71-4a11-bad6-e58bdcb8d716","Type":"ContainerDied","Data":"a9f2908ef80949a3f210663170a94ec7f009b8b4059bc965847495766ad790d0"} Oct 14 13:56:58 crc kubenswrapper[4725]: I1014 13:56:58.921897 4725 scope.go:117] "RemoveContainer" containerID="970512403f3a724310c7fbfce185ff25273c496de6bf6e9d555198ed6e64e7dd" Oct 14 13:56:58 crc kubenswrapper[4725]: E1014 13:56:58.922842 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 13:56:59 crc kubenswrapper[4725]: I1014 13:56:59.388312 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6dd7s" Oct 14 13:56:59 crc kubenswrapper[4725]: I1014 13:56:59.475810 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/23eebac6-ae71-4a11-bad6-e58bdcb8d716-nova-cell1-compute-config-0\") pod \"23eebac6-ae71-4a11-bad6-e58bdcb8d716\" (UID: \"23eebac6-ae71-4a11-bad6-e58bdcb8d716\") " Oct 14 13:56:59 crc kubenswrapper[4725]: I1014 13:56:59.475921 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/23eebac6-ae71-4a11-bad6-e58bdcb8d716-nova-cell1-compute-config-1\") pod \"23eebac6-ae71-4a11-bad6-e58bdcb8d716\" (UID: \"23eebac6-ae71-4a11-bad6-e58bdcb8d716\") " Oct 14 13:56:59 crc kubenswrapper[4725]: I1014 13:56:59.475940 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/23eebac6-ae71-4a11-bad6-e58bdcb8d716-ssh-key\") pod \"23eebac6-ae71-4a11-bad6-e58bdcb8d716\" (UID: \"23eebac6-ae71-4a11-bad6-e58bdcb8d716\") " Oct 14 13:56:59 crc kubenswrapper[4725]: I1014 13:56:59.475966 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/23eebac6-ae71-4a11-bad6-e58bdcb8d716-nova-extra-config-0\") pod \"23eebac6-ae71-4a11-bad6-e58bdcb8d716\" (UID: \"23eebac6-ae71-4a11-bad6-e58bdcb8d716\") " Oct 14 13:56:59 crc kubenswrapper[4725]: I1014 13:56:59.476060 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/23eebac6-ae71-4a11-bad6-e58bdcb8d716-nova-migration-ssh-key-1\") pod \"23eebac6-ae71-4a11-bad6-e58bdcb8d716\" (UID: \"23eebac6-ae71-4a11-bad6-e58bdcb8d716\") " Oct 14 13:56:59 crc kubenswrapper[4725]: I1014 13:56:59.476124 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23eebac6-ae71-4a11-bad6-e58bdcb8d716-inventory\") pod \"23eebac6-ae71-4a11-bad6-e58bdcb8d716\" (UID: \"23eebac6-ae71-4a11-bad6-e58bdcb8d716\") " Oct 14 13:56:59 crc kubenswrapper[4725]: I1014 13:56:59.476174 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlng5\" (UniqueName: \"kubernetes.io/projected/23eebac6-ae71-4a11-bad6-e58bdcb8d716-kube-api-access-xlng5\") pod \"23eebac6-ae71-4a11-bad6-e58bdcb8d716\" (UID: \"23eebac6-ae71-4a11-bad6-e58bdcb8d716\") " Oct 14 13:56:59 crc kubenswrapper[4725]: I1014 13:56:59.476203 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/23eebac6-ae71-4a11-bad6-e58bdcb8d716-nova-migration-ssh-key-0\") pod \"23eebac6-ae71-4a11-bad6-e58bdcb8d716\" (UID: \"23eebac6-ae71-4a11-bad6-e58bdcb8d716\") " Oct 14 13:56:59 crc kubenswrapper[4725]: I1014 13:56:59.476228 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23eebac6-ae71-4a11-bad6-e58bdcb8d716-nova-combined-ca-bundle\") pod \"23eebac6-ae71-4a11-bad6-e58bdcb8d716\" (UID: \"23eebac6-ae71-4a11-bad6-e58bdcb8d716\") " Oct 14 13:56:59 crc kubenswrapper[4725]: I1014 13:56:59.482202 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/23eebac6-ae71-4a11-bad6-e58bdcb8d716-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "23eebac6-ae71-4a11-bad6-e58bdcb8d716" (UID: "23eebac6-ae71-4a11-bad6-e58bdcb8d716"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:56:59 crc kubenswrapper[4725]: I1014 13:56:59.482386 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23eebac6-ae71-4a11-bad6-e58bdcb8d716-kube-api-access-xlng5" (OuterVolumeSpecName: "kube-api-access-xlng5") pod "23eebac6-ae71-4a11-bad6-e58bdcb8d716" (UID: "23eebac6-ae71-4a11-bad6-e58bdcb8d716"). InnerVolumeSpecName "kube-api-access-xlng5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:56:59 crc kubenswrapper[4725]: I1014 13:56:59.508336 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23eebac6-ae71-4a11-bad6-e58bdcb8d716-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "23eebac6-ae71-4a11-bad6-e58bdcb8d716" (UID: "23eebac6-ae71-4a11-bad6-e58bdcb8d716"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 13:56:59 crc kubenswrapper[4725]: I1014 13:56:59.508940 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23eebac6-ae71-4a11-bad6-e58bdcb8d716-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "23eebac6-ae71-4a11-bad6-e58bdcb8d716" (UID: "23eebac6-ae71-4a11-bad6-e58bdcb8d716"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:56:59 crc kubenswrapper[4725]: I1014 13:56:59.511211 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23eebac6-ae71-4a11-bad6-e58bdcb8d716-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "23eebac6-ae71-4a11-bad6-e58bdcb8d716" (UID: "23eebac6-ae71-4a11-bad6-e58bdcb8d716"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:56:59 crc kubenswrapper[4725]: I1014 13:56:59.512402 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23eebac6-ae71-4a11-bad6-e58bdcb8d716-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "23eebac6-ae71-4a11-bad6-e58bdcb8d716" (UID: "23eebac6-ae71-4a11-bad6-e58bdcb8d716"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:56:59 crc kubenswrapper[4725]: I1014 13:56:59.512989 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23eebac6-ae71-4a11-bad6-e58bdcb8d716-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "23eebac6-ae71-4a11-bad6-e58bdcb8d716" (UID: "23eebac6-ae71-4a11-bad6-e58bdcb8d716"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:56:59 crc kubenswrapper[4725]: I1014 13:56:59.513316 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23eebac6-ae71-4a11-bad6-e58bdcb8d716-inventory" (OuterVolumeSpecName: "inventory") pod "23eebac6-ae71-4a11-bad6-e58bdcb8d716" (UID: "23eebac6-ae71-4a11-bad6-e58bdcb8d716"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:56:59 crc kubenswrapper[4725]: I1014 13:56:59.526716 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23eebac6-ae71-4a11-bad6-e58bdcb8d716-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "23eebac6-ae71-4a11-bad6-e58bdcb8d716" (UID: "23eebac6-ae71-4a11-bad6-e58bdcb8d716"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:56:59 crc kubenswrapper[4725]: I1014 13:56:59.578922 4725 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/23eebac6-ae71-4a11-bad6-e58bdcb8d716-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Oct 14 13:56:59 crc kubenswrapper[4725]: I1014 13:56:59.578956 4725 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/23eebac6-ae71-4a11-bad6-e58bdcb8d716-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Oct 14 13:56:59 crc kubenswrapper[4725]: I1014 13:56:59.578966 4725 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/23eebac6-ae71-4a11-bad6-e58bdcb8d716-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 13:56:59 crc kubenswrapper[4725]: I1014 13:56:59.578977 4725 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/23eebac6-ae71-4a11-bad6-e58bdcb8d716-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Oct 14 13:56:59 crc kubenswrapper[4725]: I1014 13:56:59.578988 4725 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/23eebac6-ae71-4a11-bad6-e58bdcb8d716-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Oct 14 13:56:59 crc kubenswrapper[4725]: I1014 13:56:59.578997 4725 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/23eebac6-ae71-4a11-bad6-e58bdcb8d716-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 13:56:59 crc kubenswrapper[4725]: I1014 13:56:59.579006 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlng5\" (UniqueName: \"kubernetes.io/projected/23eebac6-ae71-4a11-bad6-e58bdcb8d716-kube-api-access-xlng5\") on node \"crc\" DevicePath \"\"" Oct 14 13:56:59 crc kubenswrapper[4725]: I1014 13:56:59.579017 4725 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/23eebac6-ae71-4a11-bad6-e58bdcb8d716-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Oct 14 13:56:59 crc kubenswrapper[4725]: I1014 13:56:59.579025 4725 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23eebac6-ae71-4a11-bad6-e58bdcb8d716-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:56:59 crc kubenswrapper[4725]: I1014 13:56:59.953167 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6dd7s" event={"ID":"23eebac6-ae71-4a11-bad6-e58bdcb8d716","Type":"ContainerDied","Data":"5d7564e5625838e5056fc2c60efa1f0f83e500c2636d74e3d851b4845f570042"} Oct 14 13:56:59 crc kubenswrapper[4725]: I1014 13:56:59.953572 4725 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="5d7564e5625838e5056fc2c60efa1f0f83e500c2636d74e3d851b4845f570042" Oct 14 13:56:59 crc kubenswrapper[4725]: I1014 13:56:59.953224 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-6dd7s" Oct 14 13:57:00 crc kubenswrapper[4725]: I1014 13:57:00.047219 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k"] Oct 14 13:57:00 crc kubenswrapper[4725]: E1014 13:57:00.047626 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23eebac6-ae71-4a11-bad6-e58bdcb8d716" containerName="nova-edpm-deployment-openstack-edpm-ipam" Oct 14 13:57:00 crc kubenswrapper[4725]: I1014 13:57:00.047643 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="23eebac6-ae71-4a11-bad6-e58bdcb8d716" containerName="nova-edpm-deployment-openstack-edpm-ipam" Oct 14 13:57:00 crc kubenswrapper[4725]: I1014 13:57:00.047814 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="23eebac6-ae71-4a11-bad6-e58bdcb8d716" containerName="nova-edpm-deployment-openstack-edpm-ipam" Oct 14 13:57:00 crc kubenswrapper[4725]: I1014 13:57:00.048378 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k" Oct 14 13:57:00 crc kubenswrapper[4725]: I1014 13:57:00.053479 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 14 13:57:00 crc kubenswrapper[4725]: I1014 13:57:00.053878 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Oct 14 13:57:00 crc kubenswrapper[4725]: I1014 13:57:00.053972 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qnvbv" Oct 14 13:57:00 crc kubenswrapper[4725]: I1014 13:57:00.054029 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 14 13:57:00 crc kubenswrapper[4725]: I1014 13:57:00.054441 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 14 13:57:00 crc kubenswrapper[4725]: I1014 13:57:00.062751 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k"] Oct 14 13:57:00 crc kubenswrapper[4725]: I1014 13:57:00.190321 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fd0a86b0-e908-472b-93d0-5eede843a424-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k\" (UID: \"fd0a86b0-e908-472b-93d0-5eede843a424\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k" Oct 14 13:57:00 crc kubenswrapper[4725]: I1014 13:57:00.190369 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/fd0a86b0-e908-472b-93d0-5eede843a424-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k\" (UID: \"fd0a86b0-e908-472b-93d0-5eede843a424\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k" Oct 14 13:57:00 crc kubenswrapper[4725]: I1014 13:57:00.190533 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/fd0a86b0-e908-472b-93d0-5eede843a424-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k\" (UID: \"fd0a86b0-e908-472b-93d0-5eede843a424\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k" Oct 14 13:57:00 crc kubenswrapper[4725]: I1014 13:57:00.190618 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzdpf\" (UniqueName: \"kubernetes.io/projected/fd0a86b0-e908-472b-93d0-5eede843a424-kube-api-access-gzdpf\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k\" (UID: \"fd0a86b0-e908-472b-93d0-5eede843a424\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k" Oct 14 13:57:00 crc kubenswrapper[4725]: I1014 13:57:00.190652 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/fd0a86b0-e908-472b-93d0-5eede843a424-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k\" (UID: \"fd0a86b0-e908-472b-93d0-5eede843a424\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k" Oct 14 13:57:00 crc kubenswrapper[4725]: I1014 13:57:00.190672 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd0a86b0-e908-472b-93d0-5eede843a424-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k\" (UID: \"fd0a86b0-e908-472b-93d0-5eede843a424\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k" Oct 14 13:57:00 crc kubenswrapper[4725]: I1014 13:57:00.190710 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd0a86b0-e908-472b-93d0-5eede843a424-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k\" (UID: \"fd0a86b0-e908-472b-93d0-5eede843a424\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k" Oct 14 13:57:00 crc kubenswrapper[4725]: I1014 13:57:00.292300 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzdpf\" (UniqueName: \"kubernetes.io/projected/fd0a86b0-e908-472b-93d0-5eede843a424-kube-api-access-gzdpf\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k\" (UID: \"fd0a86b0-e908-472b-93d0-5eede843a424\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k" Oct 14 13:57:00 crc kubenswrapper[4725]: I1014 13:57:00.292380 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/fd0a86b0-e908-472b-93d0-5eede843a424-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k\" (UID: \"fd0a86b0-e908-472b-93d0-5eede843a424\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k" Oct 14 13:57:00 crc kubenswrapper[4725]: I1014 13:57:00.292420 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd0a86b0-e908-472b-93d0-5eede843a424-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k\" (UID: \"fd0a86b0-e908-472b-93d0-5eede843a424\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k" Oct 14 13:57:00 
crc kubenswrapper[4725]: I1014 13:57:00.292491 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd0a86b0-e908-472b-93d0-5eede843a424-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k\" (UID: \"fd0a86b0-e908-472b-93d0-5eede843a424\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k" Oct 14 13:57:00 crc kubenswrapper[4725]: I1014 13:57:00.292637 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fd0a86b0-e908-472b-93d0-5eede843a424-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k\" (UID: \"fd0a86b0-e908-472b-93d0-5eede843a424\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k" Oct 14 13:57:00 crc kubenswrapper[4725]: I1014 13:57:00.292670 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/fd0a86b0-e908-472b-93d0-5eede843a424-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k\" (UID: \"fd0a86b0-e908-472b-93d0-5eede843a424\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k" Oct 14 13:57:00 crc kubenswrapper[4725]: I1014 13:57:00.292848 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/fd0a86b0-e908-472b-93d0-5eede843a424-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k\" (UID: \"fd0a86b0-e908-472b-93d0-5eede843a424\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k" Oct 14 13:57:00 crc kubenswrapper[4725]: I1014 13:57:00.297862 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/fd0a86b0-e908-472b-93d0-5eede843a424-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k\" (UID: \"fd0a86b0-e908-472b-93d0-5eede843a424\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k" Oct 14 13:57:00 crc kubenswrapper[4725]: I1014 13:57:00.298226 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd0a86b0-e908-472b-93d0-5eede843a424-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k\" (UID: \"fd0a86b0-e908-472b-93d0-5eede843a424\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k" Oct 14 13:57:00 crc kubenswrapper[4725]: I1014 13:57:00.302974 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/fd0a86b0-e908-472b-93d0-5eede843a424-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k\" (UID: \"fd0a86b0-e908-472b-93d0-5eede843a424\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k" Oct 14 13:57:00 crc kubenswrapper[4725]: I1014 13:57:00.303581 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/fd0a86b0-e908-472b-93d0-5eede843a424-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k\" (UID: 
\"fd0a86b0-e908-472b-93d0-5eede843a424\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k" Oct 14 13:57:00 crc kubenswrapper[4725]: I1014 13:57:00.304873 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fd0a86b0-e908-472b-93d0-5eede843a424-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k\" (UID: \"fd0a86b0-e908-472b-93d0-5eede843a424\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k" Oct 14 13:57:00 crc kubenswrapper[4725]: I1014 13:57:00.311959 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd0a86b0-e908-472b-93d0-5eede843a424-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k\" (UID: \"fd0a86b0-e908-472b-93d0-5eede843a424\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k" Oct 14 13:57:00 crc kubenswrapper[4725]: I1014 13:57:00.325112 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzdpf\" (UniqueName: \"kubernetes.io/projected/fd0a86b0-e908-472b-93d0-5eede843a424-kube-api-access-gzdpf\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k\" (UID: \"fd0a86b0-e908-472b-93d0-5eede843a424\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k" Oct 14 13:57:00 crc kubenswrapper[4725]: I1014 13:57:00.368977 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k" Oct 14 13:57:00 crc kubenswrapper[4725]: I1014 13:57:00.950520 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k"] Oct 14 13:57:00 crc kubenswrapper[4725]: I1014 13:57:00.960595 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k" event={"ID":"fd0a86b0-e908-472b-93d0-5eede843a424","Type":"ContainerStarted","Data":"fd538ebad3b8a14eba84b7dddf44616229b29d287df663340d0f43bd3418eae8"} Oct 14 13:57:01 crc kubenswrapper[4725]: I1014 13:57:01.973750 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k" event={"ID":"fd0a86b0-e908-472b-93d0-5eede843a424","Type":"ContainerStarted","Data":"8065daadb6cc050d8d743357bf71cba7da4d6aa6d7a46903294f25ab7eaa18cc"} Oct 14 13:57:01 crc kubenswrapper[4725]: I1014 13:57:01.996869 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k" podStartSLOduration=1.506691473 podStartE2EDuration="1.996850263s" podCreationTimestamp="2025-10-14 13:57:00 +0000 UTC" firstStartedPulling="2025-10-14 13:57:00.953644621 +0000 UTC m=+2537.802079430" lastFinishedPulling="2025-10-14 13:57:01.443803401 +0000 UTC m=+2538.292238220" observedRunningTime="2025-10-14 13:57:01.995283821 +0000 UTC m=+2538.843718630" watchObservedRunningTime="2025-10-14 13:57:01.996850263 +0000 UTC m=+2538.845285072" Oct 14 13:57:12 crc kubenswrapper[4725]: I1014 13:57:12.920766 4725 scope.go:117] "RemoveContainer" containerID="970512403f3a724310c7fbfce185ff25273c496de6bf6e9d555198ed6e64e7dd" Oct 14 13:57:12 crc kubenswrapper[4725]: E1014 13:57:12.921503 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 13:57:23 crc kubenswrapper[4725]: I1014 13:57:23.929183 4725 scope.go:117] "RemoveContainer" containerID="970512403f3a724310c7fbfce185ff25273c496de6bf6e9d555198ed6e64e7dd" Oct 14 13:57:23 crc kubenswrapper[4725]: E1014 13:57:23.930286 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 13:57:35 crc kubenswrapper[4725]: I1014 13:57:35.922028 4725 scope.go:117] "RemoveContainer" containerID="970512403f3a724310c7fbfce185ff25273c496de6bf6e9d555198ed6e64e7dd" Oct 14 13:57:36 crc kubenswrapper[4725]: I1014 13:57:36.298156 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" event={"ID":"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c","Type":"ContainerStarted","Data":"a392a4202d6674157990a617c0cb2e4f29e471b98c57f4957a82f7d89a3e7d41"} Oct 14 13:57:46 crc kubenswrapper[4725]: I1014 13:57:46.580756 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7m42n"] Oct 14 13:57:46 crc kubenswrapper[4725]: I1014 13:57:46.583783 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7m42n" Oct 14 13:57:46 crc kubenswrapper[4725]: I1014 13:57:46.596469 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7m42n"] Oct 14 13:57:46 crc kubenswrapper[4725]: I1014 13:57:46.656666 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd05f006-b315-47e5-a225-c349bd047f45-catalog-content\") pod \"certified-operators-7m42n\" (UID: \"dd05f006-b315-47e5-a225-c349bd047f45\") " pod="openshift-marketplace/certified-operators-7m42n" Oct 14 13:57:46 crc kubenswrapper[4725]: I1014 13:57:46.656815 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8lt4\" (UniqueName: \"kubernetes.io/projected/dd05f006-b315-47e5-a225-c349bd047f45-kube-api-access-d8lt4\") pod \"certified-operators-7m42n\" (UID: \"dd05f006-b315-47e5-a225-c349bd047f45\") " pod="openshift-marketplace/certified-operators-7m42n" Oct 14 13:57:46 crc kubenswrapper[4725]: I1014 13:57:46.657037 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd05f006-b315-47e5-a225-c349bd047f45-utilities\") pod \"certified-operators-7m42n\" (UID: \"dd05f006-b315-47e5-a225-c349bd047f45\") " pod="openshift-marketplace/certified-operators-7m42n" Oct 14 13:57:46 crc kubenswrapper[4725]: I1014 13:57:46.758758 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd05f006-b315-47e5-a225-c349bd047f45-catalog-content\") pod \"certified-operators-7m42n\" (UID: \"dd05f006-b315-47e5-a225-c349bd047f45\") " 
pod="openshift-marketplace/certified-operators-7m42n" Oct 14 13:57:46 crc kubenswrapper[4725]: I1014 13:57:46.758867 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8lt4\" (UniqueName: \"kubernetes.io/projected/dd05f006-b315-47e5-a225-c349bd047f45-kube-api-access-d8lt4\") pod \"certified-operators-7m42n\" (UID: \"dd05f006-b315-47e5-a225-c349bd047f45\") " pod="openshift-marketplace/certified-operators-7m42n" Oct 14 13:57:46 crc kubenswrapper[4725]: I1014 13:57:46.758934 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd05f006-b315-47e5-a225-c349bd047f45-utilities\") pod \"certified-operators-7m42n\" (UID: \"dd05f006-b315-47e5-a225-c349bd047f45\") " pod="openshift-marketplace/certified-operators-7m42n" Oct 14 13:57:46 crc kubenswrapper[4725]: I1014 13:57:46.759309 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd05f006-b315-47e5-a225-c349bd047f45-catalog-content\") pod \"certified-operators-7m42n\" (UID: \"dd05f006-b315-47e5-a225-c349bd047f45\") " pod="openshift-marketplace/certified-operators-7m42n" Oct 14 13:57:46 crc kubenswrapper[4725]: I1014 13:57:46.759389 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd05f006-b315-47e5-a225-c349bd047f45-utilities\") pod \"certified-operators-7m42n\" (UID: \"dd05f006-b315-47e5-a225-c349bd047f45\") " pod="openshift-marketplace/certified-operators-7m42n" Oct 14 13:57:46 crc kubenswrapper[4725]: I1014 13:57:46.781114 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8lt4\" (UniqueName: \"kubernetes.io/projected/dd05f006-b315-47e5-a225-c349bd047f45-kube-api-access-d8lt4\") pod \"certified-operators-7m42n\" (UID: \"dd05f006-b315-47e5-a225-c349bd047f45\") " pod="openshift-marketplace/certified-operators-7m42n" Oct 14 13:57:46 crc kubenswrapper[4725]: I1014 13:57:46.912582 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7m42n" Oct 14 13:57:47 crc kubenswrapper[4725]: I1014 13:57:47.397752 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7m42n"] Oct 14 13:57:48 crc kubenswrapper[4725]: I1014 13:57:48.414356 4725 generic.go:334] "Generic (PLEG): container finished" podID="dd05f006-b315-47e5-a225-c349bd047f45" containerID="e20529d1c30b5886482acd8343ce832ce463a6cff3ea7ad6b27e1eb6534596d9" exitCode=0 Oct 14 13:57:48 crc kubenswrapper[4725]: I1014 13:57:48.414555 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7m42n" event={"ID":"dd05f006-b315-47e5-a225-c349bd047f45","Type":"ContainerDied","Data":"e20529d1c30b5886482acd8343ce832ce463a6cff3ea7ad6b27e1eb6534596d9"} Oct 14 13:57:48 crc kubenswrapper[4725]: I1014 13:57:48.414889 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7m42n" event={"ID":"dd05f006-b315-47e5-a225-c349bd047f45","Type":"ContainerStarted","Data":"04fdb11b77c16f4df96b4007769abd6775c1eb604f241d896b47ec0cf1979364"} Oct 14 13:57:49 crc kubenswrapper[4725]: I1014 13:57:49.427156 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7m42n" event={"ID":"dd05f006-b315-47e5-a225-c349bd047f45","Type":"ContainerStarted","Data":"eead4d4e011ec1705fce9c76935eec7db10db3a37a74bb0232492e7e4daef817"} Oct 14 13:57:50 crc kubenswrapper[4725]: I1014 13:57:50.438214 4725 generic.go:334] "Generic (PLEG): container finished" podID="dd05f006-b315-47e5-a225-c349bd047f45" containerID="eead4d4e011ec1705fce9c76935eec7db10db3a37a74bb0232492e7e4daef817" exitCode=0 Oct 14 13:57:50 crc kubenswrapper[4725]: I1014 13:57:50.438287 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7m42n" event={"ID":"dd05f006-b315-47e5-a225-c349bd047f45","Type":"ContainerDied","Data":"eead4d4e011ec1705fce9c76935eec7db10db3a37a74bb0232492e7e4daef817"} Oct 14 13:57:51 crc kubenswrapper[4725]: I1014 13:57:51.450301 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7m42n" event={"ID":"dd05f006-b315-47e5-a225-c349bd047f45","Type":"ContainerStarted","Data":"6e822ccd60231f713568a77ce108d3b97b12fff148477b3bbdd4ee8399be9b07"} Oct 14 13:57:51 crc kubenswrapper[4725]: I1014 13:57:51.469945 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7m42n" podStartSLOduration=2.8542789710000003 podStartE2EDuration="5.469926061s" podCreationTimestamp="2025-10-14 13:57:46 +0000 UTC" firstStartedPulling="2025-10-14 13:57:48.417266592 +0000 UTC m=+2585.265701411" lastFinishedPulling="2025-10-14 13:57:51.032913652 +0000 UTC m=+2587.881348501" observedRunningTime="2025-10-14 13:57:51.466740776 +0000 UTC m=+2588.315175595" watchObservedRunningTime="2025-10-14 13:57:51.469926061 +0000 UTC m=+2588.318360880" Oct 14 13:57:56 crc kubenswrapper[4725]: I1014 13:57:56.913730 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7m42n" Oct 14 13:57:56 crc kubenswrapper[4725]: I1014 13:57:56.914688 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7m42n" Oct 14 13:57:57 crc kubenswrapper[4725]: I1014 13:57:56.999970 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-7m42n" Oct 14 13:57:57 crc kubenswrapper[4725]: I1014 13:57:57.568911 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7m42n" Oct 14 13:57:57 crc kubenswrapper[4725]: I1014 13:57:57.630657 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7m42n"] Oct 14 13:57:59 crc kubenswrapper[4725]: I1014 13:57:59.535068 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7m42n" podUID="dd05f006-b315-47e5-a225-c349bd047f45" containerName="registry-server" containerID="cri-o://6e822ccd60231f713568a77ce108d3b97b12fff148477b3bbdd4ee8399be9b07" gracePeriod=2 Oct 14 13:58:00 crc kubenswrapper[4725]: I1014 13:58:00.013360 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7m42n" Oct 14 13:58:00 crc kubenswrapper[4725]: I1014 13:58:00.127257 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8lt4\" (UniqueName: \"kubernetes.io/projected/dd05f006-b315-47e5-a225-c349bd047f45-kube-api-access-d8lt4\") pod \"dd05f006-b315-47e5-a225-c349bd047f45\" (UID: \"dd05f006-b315-47e5-a225-c349bd047f45\") " Oct 14 13:58:00 crc kubenswrapper[4725]: I1014 13:58:00.127476 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd05f006-b315-47e5-a225-c349bd047f45-utilities\") pod \"dd05f006-b315-47e5-a225-c349bd047f45\" (UID: \"dd05f006-b315-47e5-a225-c349bd047f45\") " Oct 14 13:58:00 crc kubenswrapper[4725]: I1014 13:58:00.127594 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd05f006-b315-47e5-a225-c349bd047f45-catalog-content\") pod \"dd05f006-b315-47e5-a225-c349bd047f45\" (UID: \"dd05f006-b315-47e5-a225-c349bd047f45\") " Oct 14 13:58:00 crc kubenswrapper[4725]: I1014 13:58:00.128308 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd05f006-b315-47e5-a225-c349bd047f45-utilities" (OuterVolumeSpecName: "utilities") pod "dd05f006-b315-47e5-a225-c349bd047f45" (UID: "dd05f006-b315-47e5-a225-c349bd047f45"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:58:00 crc kubenswrapper[4725]: I1014 13:58:00.133289 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd05f006-b315-47e5-a225-c349bd047f45-kube-api-access-d8lt4" (OuterVolumeSpecName: "kube-api-access-d8lt4") pod "dd05f006-b315-47e5-a225-c349bd047f45" (UID: "dd05f006-b315-47e5-a225-c349bd047f45"). InnerVolumeSpecName "kube-api-access-d8lt4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:58:00 crc kubenswrapper[4725]: I1014 13:58:00.151288 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8lt4\" (UniqueName: \"kubernetes.io/projected/dd05f006-b315-47e5-a225-c349bd047f45-kube-api-access-d8lt4\") on node \"crc\" DevicePath \"\"" Oct 14 13:58:00 crc kubenswrapper[4725]: I1014 13:58:00.151323 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd05f006-b315-47e5-a225-c349bd047f45-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 13:58:00 crc kubenswrapper[4725]: I1014 13:58:00.188614 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd05f006-b315-47e5-a225-c349bd047f45-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dd05f006-b315-47e5-a225-c349bd047f45" (UID: "dd05f006-b315-47e5-a225-c349bd047f45"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:58:00 crc kubenswrapper[4725]: I1014 13:58:00.254821 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd05f006-b315-47e5-a225-c349bd047f45-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 13:58:00 crc kubenswrapper[4725]: I1014 13:58:00.549736 4725 generic.go:334] "Generic (PLEG): container finished" podID="dd05f006-b315-47e5-a225-c349bd047f45" containerID="6e822ccd60231f713568a77ce108d3b97b12fff148477b3bbdd4ee8399be9b07" exitCode=0 Oct 14 13:58:00 crc kubenswrapper[4725]: I1014 13:58:00.549773 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7m42n" event={"ID":"dd05f006-b315-47e5-a225-c349bd047f45","Type":"ContainerDied","Data":"6e822ccd60231f713568a77ce108d3b97b12fff148477b3bbdd4ee8399be9b07"} Oct 14 13:58:00 crc kubenswrapper[4725]: I1014 13:58:00.549799 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7m42n" event={"ID":"dd05f006-b315-47e5-a225-c349bd047f45","Type":"ContainerDied","Data":"04fdb11b77c16f4df96b4007769abd6775c1eb604f241d896b47ec0cf1979364"} Oct 14 13:58:00 crc kubenswrapper[4725]: I1014 13:58:00.549817 4725 scope.go:117] "RemoveContainer" containerID="6e822ccd60231f713568a77ce108d3b97b12fff148477b3bbdd4ee8399be9b07" Oct 14 13:58:00 crc kubenswrapper[4725]: I1014 13:58:00.549946 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7m42n" Oct 14 13:58:00 crc kubenswrapper[4725]: I1014 13:58:00.596661 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7m42n"] Oct 14 13:58:00 crc kubenswrapper[4725]: I1014 13:58:00.596883 4725 scope.go:117] "RemoveContainer" containerID="eead4d4e011ec1705fce9c76935eec7db10db3a37a74bb0232492e7e4daef817" Oct 14 13:58:00 crc kubenswrapper[4725]: I1014 13:58:00.607035 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7m42n"] Oct 14 13:58:00 crc kubenswrapper[4725]: I1014 13:58:00.634800 4725 scope.go:117] "RemoveContainer" containerID="e20529d1c30b5886482acd8343ce832ce463a6cff3ea7ad6b27e1eb6534596d9" Oct 14 13:58:00 crc kubenswrapper[4725]: I1014 13:58:00.671489 4725 scope.go:117] "RemoveContainer" containerID="6e822ccd60231f713568a77ce108d3b97b12fff148477b3bbdd4ee8399be9b07" Oct 14 13:58:00 crc kubenswrapper[4725]: E1014 13:58:00.672510 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e822ccd60231f713568a77ce108d3b97b12fff148477b3bbdd4ee8399be9b07\": container with ID starting with 6e822ccd60231f713568a77ce108d3b97b12fff148477b3bbdd4ee8399be9b07 not found: ID does not exist" containerID="6e822ccd60231f713568a77ce108d3b97b12fff148477b3bbdd4ee8399be9b07" Oct 14 13:58:00 crc kubenswrapper[4725]: I1014 13:58:00.672556 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e822ccd60231f713568a77ce108d3b97b12fff148477b3bbdd4ee8399be9b07"} err="failed to get container status \"6e822ccd60231f713568a77ce108d3b97b12fff148477b3bbdd4ee8399be9b07\": rpc error: code = NotFound desc = could not find container \"6e822ccd60231f713568a77ce108d3b97b12fff148477b3bbdd4ee8399be9b07\": container with ID starting with 6e822ccd60231f713568a77ce108d3b97b12fff148477b3bbdd4ee8399be9b07 not found: ID does not exist" Oct 14 13:58:00 crc kubenswrapper[4725]: I1014 13:58:00.672591 4725 scope.go:117] "RemoveContainer" containerID="eead4d4e011ec1705fce9c76935eec7db10db3a37a74bb0232492e7e4daef817" Oct 14 13:58:00 crc kubenswrapper[4725]: E1014 13:58:00.673077 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eead4d4e011ec1705fce9c76935eec7db10db3a37a74bb0232492e7e4daef817\": container with ID starting with eead4d4e011ec1705fce9c76935eec7db10db3a37a74bb0232492e7e4daef817 not found: ID does not exist" containerID="eead4d4e011ec1705fce9c76935eec7db10db3a37a74bb0232492e7e4daef817" Oct 14 13:58:00 crc kubenswrapper[4725]: I1014 13:58:00.673104 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eead4d4e011ec1705fce9c76935eec7db10db3a37a74bb0232492e7e4daef817"} err="failed to get container status \"eead4d4e011ec1705fce9c76935eec7db10db3a37a74bb0232492e7e4daef817\": rpc error: code = NotFound desc = could not find container \"eead4d4e011ec1705fce9c76935eec7db10db3a37a74bb0232492e7e4daef817\": container with ID starting with eead4d4e011ec1705fce9c76935eec7db10db3a37a74bb0232492e7e4daef817 not found: ID does not exist" Oct 14 13:58:00 crc kubenswrapper[4725]: I1014 13:58:00.673120 4725 scope.go:117] "RemoveContainer" containerID="e20529d1c30b5886482acd8343ce832ce463a6cff3ea7ad6b27e1eb6534596d9" Oct 14 13:58:00 crc kubenswrapper[4725]: E1014 13:58:00.673596 4725 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e20529d1c30b5886482acd8343ce832ce463a6cff3ea7ad6b27e1eb6534596d9\": container with ID starting with e20529d1c30b5886482acd8343ce832ce463a6cff3ea7ad6b27e1eb6534596d9 not found: ID does not exist" containerID="e20529d1c30b5886482acd8343ce832ce463a6cff3ea7ad6b27e1eb6534596d9" Oct 14 13:58:00 crc kubenswrapper[4725]: I1014 13:58:00.673623 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e20529d1c30b5886482acd8343ce832ce463a6cff3ea7ad6b27e1eb6534596d9"} err="failed to get container status \"e20529d1c30b5886482acd8343ce832ce463a6cff3ea7ad6b27e1eb6534596d9\": rpc error: code = NotFound desc = could not find container \"e20529d1c30b5886482acd8343ce832ce463a6cff3ea7ad6b27e1eb6534596d9\": container with ID starting with e20529d1c30b5886482acd8343ce832ce463a6cff3ea7ad6b27e1eb6534596d9 not found: ID does not exist" Oct 14 13:58:01 crc kubenswrapper[4725]: I1014 13:58:01.940311 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd05f006-b315-47e5-a225-c349bd047f45" path="/var/lib/kubelet/pods/dd05f006-b315-47e5-a225-c349bd047f45/volumes" Oct 14 13:59:18 crc kubenswrapper[4725]: I1014 13:59:18.620365 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5fnt2"] Oct 14 13:59:18 crc kubenswrapper[4725]: E1014 13:59:18.621648 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd05f006-b315-47e5-a225-c349bd047f45" containerName="extract-content" Oct 14 13:59:18 crc kubenswrapper[4725]: I1014 13:59:18.621671 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd05f006-b315-47e5-a225-c349bd047f45" containerName="extract-content" Oct 14 13:59:18 crc kubenswrapper[4725]: E1014 13:59:18.621693 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd05f006-b315-47e5-a225-c349bd047f45" containerName="registry-server" Oct 14 13:59:18 crc kubenswrapper[4725]: I1014 13:59:18.621705 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd05f006-b315-47e5-a225-c349bd047f45" containerName="registry-server" Oct 14 13:59:18 crc kubenswrapper[4725]: E1014 13:59:18.621757 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd05f006-b315-47e5-a225-c349bd047f45" containerName="extract-utilities" Oct 14 13:59:18 crc kubenswrapper[4725]: I1014 13:59:18.621771 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd05f006-b315-47e5-a225-c349bd047f45" containerName="extract-utilities" Oct 14 13:59:18 crc kubenswrapper[4725]: I1014 13:59:18.622146 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd05f006-b315-47e5-a225-c349bd047f45" containerName="registry-server" Oct 14 13:59:18 crc kubenswrapper[4725]: I1014 13:59:18.624548 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5fnt2" Oct 14 13:59:18 crc kubenswrapper[4725]: I1014 13:59:18.637635 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5fnt2"] Oct 14 13:59:18 crc kubenswrapper[4725]: I1014 13:59:18.710899 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jjgb\" (UniqueName: \"kubernetes.io/projected/3a8b81da-4a1d-4c01-9618-7df74ae186e4-kube-api-access-8jjgb\") pod \"redhat-marketplace-5fnt2\" (UID: \"3a8b81da-4a1d-4c01-9618-7df74ae186e4\") " pod="openshift-marketplace/redhat-marketplace-5fnt2" Oct 14 13:59:18 crc kubenswrapper[4725]: I1014 13:59:18.711361 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a8b81da-4a1d-4c01-9618-7df74ae186e4-utilities\") pod \"redhat-marketplace-5fnt2\" (UID: \"3a8b81da-4a1d-4c01-9618-7df74ae186e4\") " pod="openshift-marketplace/redhat-marketplace-5fnt2" Oct 14 13:59:18 crc kubenswrapper[4725]: I1014 13:59:18.711536 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a8b81da-4a1d-4c01-9618-7df74ae186e4-catalog-content\") pod \"redhat-marketplace-5fnt2\" (UID: \"3a8b81da-4a1d-4c01-9618-7df74ae186e4\") " pod="openshift-marketplace/redhat-marketplace-5fnt2" Oct 14 13:59:18 crc kubenswrapper[4725]: I1014 13:59:18.813751 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a8b81da-4a1d-4c01-9618-7df74ae186e4-utilities\") pod \"redhat-marketplace-5fnt2\" (UID: \"3a8b81da-4a1d-4c01-9618-7df74ae186e4\") " pod="openshift-marketplace/redhat-marketplace-5fnt2" Oct 14 13:59:18 crc kubenswrapper[4725]: I1014 13:59:18.813818 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a8b81da-4a1d-4c01-9618-7df74ae186e4-catalog-content\") pod \"redhat-marketplace-5fnt2\" (UID: \"3a8b81da-4a1d-4c01-9618-7df74ae186e4\") " pod="openshift-marketplace/redhat-marketplace-5fnt2" Oct 14 13:59:18 crc kubenswrapper[4725]: I1014 13:59:18.813858 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jjgb\" (UniqueName: \"kubernetes.io/projected/3a8b81da-4a1d-4c01-9618-7df74ae186e4-kube-api-access-8jjgb\") pod \"redhat-marketplace-5fnt2\" (UID: \"3a8b81da-4a1d-4c01-9618-7df74ae186e4\") " pod="openshift-marketplace/redhat-marketplace-5fnt2" Oct 14 13:59:18 crc kubenswrapper[4725]: I1014 13:59:18.814351 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a8b81da-4a1d-4c01-9618-7df74ae186e4-utilities\") pod \"redhat-marketplace-5fnt2\" (UID: \"3a8b81da-4a1d-4c01-9618-7df74ae186e4\") " pod="openshift-marketplace/redhat-marketplace-5fnt2" Oct 14 13:59:18 crc kubenswrapper[4725]: I1014 13:59:18.814593 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a8b81da-4a1d-4c01-9618-7df74ae186e4-catalog-content\") pod \"redhat-marketplace-5fnt2\" (UID: \"3a8b81da-4a1d-4c01-9618-7df74ae186e4\") " pod="openshift-marketplace/redhat-marketplace-5fnt2" Oct 14 13:59:18 crc kubenswrapper[4725]: I1014 13:59:18.833149 4725 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-8jjgb\" (UniqueName: \"kubernetes.io/projected/3a8b81da-4a1d-4c01-9618-7df74ae186e4-kube-api-access-8jjgb\") pod \"redhat-marketplace-5fnt2\" (UID: \"3a8b81da-4a1d-4c01-9618-7df74ae186e4\") " pod="openshift-marketplace/redhat-marketplace-5fnt2" Oct 14 13:59:18 crc kubenswrapper[4725]: I1014 13:59:18.973544 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5fnt2" Oct 14 13:59:19 crc kubenswrapper[4725]: I1014 13:59:19.416228 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5fnt2"] Oct 14 13:59:20 crc kubenswrapper[4725]: I1014 13:59:20.372653 4725 generic.go:334] "Generic (PLEG): container finished" podID="3a8b81da-4a1d-4c01-9618-7df74ae186e4" containerID="dc203d7bf83a97fac97af1ee82c0ad40072de67693990d7040c9fc7a00dd8ba6" exitCode=0 Oct 14 13:59:20 crc kubenswrapper[4725]: I1014 13:59:20.372709 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5fnt2" event={"ID":"3a8b81da-4a1d-4c01-9618-7df74ae186e4","Type":"ContainerDied","Data":"dc203d7bf83a97fac97af1ee82c0ad40072de67693990d7040c9fc7a00dd8ba6"} Oct 14 13:59:20 crc kubenswrapper[4725]: I1014 13:59:20.372943 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5fnt2" event={"ID":"3a8b81da-4a1d-4c01-9618-7df74ae186e4","Type":"ContainerStarted","Data":"551c5d10af1c18fb1a2cd23bc0db1fa2fff5e5e8944a080c9e7cf0155968a925"} Oct 14 13:59:20 crc kubenswrapper[4725]: I1014 13:59:20.374855 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 13:59:21 crc kubenswrapper[4725]: I1014 13:59:21.384009 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5fnt2" event={"ID":"3a8b81da-4a1d-4c01-9618-7df74ae186e4","Type":"ContainerStarted","Data":"b2e045b3a02edcc94e128ecb4c68ca24a49073b9d1c13f54563b13ea9e02a938"} Oct 14 13:59:22 crc kubenswrapper[4725]: I1014 13:59:22.417871 4725 generic.go:334] "Generic (PLEG): container finished" podID="3a8b81da-4a1d-4c01-9618-7df74ae186e4" containerID="b2e045b3a02edcc94e128ecb4c68ca24a49073b9d1c13f54563b13ea9e02a938" exitCode=0 Oct 14 13:59:22 crc kubenswrapper[4725]: I1014 13:59:22.418176 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5fnt2" event={"ID":"3a8b81da-4a1d-4c01-9618-7df74ae186e4","Type":"ContainerDied","Data":"b2e045b3a02edcc94e128ecb4c68ca24a49073b9d1c13f54563b13ea9e02a938"} Oct 14 13:59:23 crc kubenswrapper[4725]: I1014 13:59:23.427028 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5fnt2" event={"ID":"3a8b81da-4a1d-4c01-9618-7df74ae186e4","Type":"ContainerStarted","Data":"0e0e518ddf4fa7b733409b024a76a4d7ba068857b8d9172d2004f5dd5d155e00"} Oct 14 13:59:23 crc kubenswrapper[4725]: I1014 13:59:23.449342 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5fnt2" podStartSLOduration=2.954369928 podStartE2EDuration="5.449325262s" podCreationTimestamp="2025-10-14 13:59:18 +0000 UTC" firstStartedPulling="2025-10-14 13:59:20.374618599 +0000 UTC m=+2677.223053408" lastFinishedPulling="2025-10-14 13:59:22.869573923 +0000 UTC m=+2679.718008742" observedRunningTime="2025-10-14 13:59:23.443040183 +0000 UTC m=+2680.291475002" watchObservedRunningTime="2025-10-14 13:59:23.449325262 +0000 UTC 
m=+2680.297760071" Oct 14 13:59:26 crc kubenswrapper[4725]: I1014 13:59:26.463525 4725 generic.go:334] "Generic (PLEG): container finished" podID="fd0a86b0-e908-472b-93d0-5eede843a424" containerID="8065daadb6cc050d8d743357bf71cba7da4d6aa6d7a46903294f25ab7eaa18cc" exitCode=0 Oct 14 13:59:26 crc kubenswrapper[4725]: I1014 13:59:26.463579 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k" event={"ID":"fd0a86b0-e908-472b-93d0-5eede843a424","Type":"ContainerDied","Data":"8065daadb6cc050d8d743357bf71cba7da4d6aa6d7a46903294f25ab7eaa18cc"} Oct 14 13:59:27 crc kubenswrapper[4725]: I1014 13:59:27.966554 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k" Oct 14 13:59:28 crc kubenswrapper[4725]: I1014 13:59:28.091146 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd0a86b0-e908-472b-93d0-5eede843a424-inventory\") pod \"fd0a86b0-e908-472b-93d0-5eede843a424\" (UID: \"fd0a86b0-e908-472b-93d0-5eede843a424\") " Oct 14 13:59:28 crc kubenswrapper[4725]: I1014 13:59:28.091192 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/fd0a86b0-e908-472b-93d0-5eede843a424-ceilometer-compute-config-data-1\") pod \"fd0a86b0-e908-472b-93d0-5eede843a424\" (UID: \"fd0a86b0-e908-472b-93d0-5eede843a424\") " Oct 14 13:59:28 crc kubenswrapper[4725]: I1014 13:59:28.091249 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzdpf\" (UniqueName: \"kubernetes.io/projected/fd0a86b0-e908-472b-93d0-5eede843a424-kube-api-access-gzdpf\") pod \"fd0a86b0-e908-472b-93d0-5eede843a424\" (UID: \"fd0a86b0-e908-472b-93d0-5eede843a424\") " Oct 14 13:59:28 crc kubenswrapper[4725]: I1014 13:59:28.091296 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fd0a86b0-e908-472b-93d0-5eede843a424-ssh-key\") pod \"fd0a86b0-e908-472b-93d0-5eede843a424\" (UID: \"fd0a86b0-e908-472b-93d0-5eede843a424\") " Oct 14 13:59:28 crc kubenswrapper[4725]: I1014 13:59:28.091341 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/fd0a86b0-e908-472b-93d0-5eede843a424-ceilometer-compute-config-data-2\") pod \"fd0a86b0-e908-472b-93d0-5eede843a424\" (UID: \"fd0a86b0-e908-472b-93d0-5eede843a424\") " Oct 14 13:59:28 crc kubenswrapper[4725]: I1014 13:59:28.091380 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd0a86b0-e908-472b-93d0-5eede843a424-telemetry-combined-ca-bundle\") pod \"fd0a86b0-e908-472b-93d0-5eede843a424\" (UID: \"fd0a86b0-e908-472b-93d0-5eede843a424\") " Oct 14 13:59:28 crc kubenswrapper[4725]: I1014 13:59:28.091481 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/fd0a86b0-e908-472b-93d0-5eede843a424-ceilometer-compute-config-data-0\") pod \"fd0a86b0-e908-472b-93d0-5eede843a424\" (UID: \"fd0a86b0-e908-472b-93d0-5eede843a424\") " Oct 14 13:59:28 crc kubenswrapper[4725]: I1014 13:59:28.097616 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/fd0a86b0-e908-472b-93d0-5eede843a424-kube-api-access-gzdpf" (OuterVolumeSpecName: "kube-api-access-gzdpf") pod "fd0a86b0-e908-472b-93d0-5eede843a424" (UID: "fd0a86b0-e908-472b-93d0-5eede843a424"). InnerVolumeSpecName "kube-api-access-gzdpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:59:28 crc kubenswrapper[4725]: I1014 13:59:28.097784 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd0a86b0-e908-472b-93d0-5eede843a424-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "fd0a86b0-e908-472b-93d0-5eede843a424" (UID: "fd0a86b0-e908-472b-93d0-5eede843a424"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:59:28 crc kubenswrapper[4725]: I1014 13:59:28.120174 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd0a86b0-e908-472b-93d0-5eede843a424-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "fd0a86b0-e908-472b-93d0-5eede843a424" (UID: "fd0a86b0-e908-472b-93d0-5eede843a424"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:59:28 crc kubenswrapper[4725]: I1014 13:59:28.123686 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd0a86b0-e908-472b-93d0-5eede843a424-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "fd0a86b0-e908-472b-93d0-5eede843a424" (UID: "fd0a86b0-e908-472b-93d0-5eede843a424"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:59:28 crc kubenswrapper[4725]: I1014 13:59:28.124558 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd0a86b0-e908-472b-93d0-5eede843a424-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "fd0a86b0-e908-472b-93d0-5eede843a424" (UID: "fd0a86b0-e908-472b-93d0-5eede843a424"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:59:28 crc kubenswrapper[4725]: I1014 13:59:28.127835 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd0a86b0-e908-472b-93d0-5eede843a424-inventory" (OuterVolumeSpecName: "inventory") pod "fd0a86b0-e908-472b-93d0-5eede843a424" (UID: "fd0a86b0-e908-472b-93d0-5eede843a424"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:59:28 crc kubenswrapper[4725]: I1014 13:59:28.128857 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd0a86b0-e908-472b-93d0-5eede843a424-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "fd0a86b0-e908-472b-93d0-5eede843a424" (UID: "fd0a86b0-e908-472b-93d0-5eede843a424"). InnerVolumeSpecName "ceilometer-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 13:59:28 crc kubenswrapper[4725]: I1014 13:59:28.194514 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzdpf\" (UniqueName: \"kubernetes.io/projected/fd0a86b0-e908-472b-93d0-5eede843a424-kube-api-access-gzdpf\") on node \"crc\" DevicePath \"\"" Oct 14 13:59:28 crc kubenswrapper[4725]: I1014 13:59:28.194584 4725 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fd0a86b0-e908-472b-93d0-5eede843a424-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 13:59:28 crc kubenswrapper[4725]: I1014 13:59:28.194615 4725 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/fd0a86b0-e908-472b-93d0-5eede843a424-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Oct 14 13:59:28 crc kubenswrapper[4725]: I1014 13:59:28.194637 4725 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd0a86b0-e908-472b-93d0-5eede843a424-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 13:59:28 crc kubenswrapper[4725]: I1014 13:59:28.194656 4725 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/fd0a86b0-e908-472b-93d0-5eede843a424-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Oct 14 13:59:28 crc kubenswrapper[4725]: I1014 13:59:28.194673 4725 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd0a86b0-e908-472b-93d0-5eede843a424-inventory\") on node \"crc\" DevicePath \"\"" Oct 14 13:59:28 crc kubenswrapper[4725]: I1014 13:59:28.194693 4725 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/fd0a86b0-e908-472b-93d0-5eede843a424-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Oct 14 13:59:28 crc kubenswrapper[4725]: I1014 13:59:28.492975 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k" event={"ID":"fd0a86b0-e908-472b-93d0-5eede843a424","Type":"ContainerDied","Data":"fd538ebad3b8a14eba84b7dddf44616229b29d287df663340d0f43bd3418eae8"} Oct 14 13:59:28 crc kubenswrapper[4725]: I1014 13:59:28.493433 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd538ebad3b8a14eba84b7dddf44616229b29d287df663340d0f43bd3418eae8" Oct 14 13:59:28 crc kubenswrapper[4725]: I1014 13:59:28.493211 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k" Oct 14 13:59:28 crc kubenswrapper[4725]: I1014 13:59:28.974614 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5fnt2" Oct 14 13:59:28 crc kubenswrapper[4725]: I1014 13:59:28.974903 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5fnt2" Oct 14 13:59:29 crc kubenswrapper[4725]: I1014 13:59:29.029329 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5fnt2" Oct 14 13:59:29 crc kubenswrapper[4725]: I1014 13:59:29.568720 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5fnt2" Oct 14 13:59:29 crc kubenswrapper[4725]: I1014 13:59:29.618666 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5fnt2"] Oct 14 13:59:31 crc kubenswrapper[4725]: I1014 13:59:31.524724 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5fnt2" podUID="3a8b81da-4a1d-4c01-9618-7df74ae186e4" containerName="registry-server" containerID="cri-o://0e0e518ddf4fa7b733409b024a76a4d7ba068857b8d9172d2004f5dd5d155e00" gracePeriod=2 Oct 14 13:59:32 crc kubenswrapper[4725]: I1014 13:59:32.034943 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5fnt2" Oct 14 13:59:32 crc kubenswrapper[4725]: I1014 13:59:32.174723 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jjgb\" (UniqueName: \"kubernetes.io/projected/3a8b81da-4a1d-4c01-9618-7df74ae186e4-kube-api-access-8jjgb\") pod \"3a8b81da-4a1d-4c01-9618-7df74ae186e4\" (UID: \"3a8b81da-4a1d-4c01-9618-7df74ae186e4\") " Oct 14 13:59:32 crc kubenswrapper[4725]: I1014 13:59:32.175039 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a8b81da-4a1d-4c01-9618-7df74ae186e4-catalog-content\") pod \"3a8b81da-4a1d-4c01-9618-7df74ae186e4\" (UID: \"3a8b81da-4a1d-4c01-9618-7df74ae186e4\") " Oct 14 13:59:32 crc kubenswrapper[4725]: I1014 13:59:32.175092 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a8b81da-4a1d-4c01-9618-7df74ae186e4-utilities\") pod \"3a8b81da-4a1d-4c01-9618-7df74ae186e4\" (UID: \"3a8b81da-4a1d-4c01-9618-7df74ae186e4\") " Oct 14 13:59:32 crc kubenswrapper[4725]: I1014 13:59:32.176045 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a8b81da-4a1d-4c01-9618-7df74ae186e4-utilities" (OuterVolumeSpecName: "utilities") pod "3a8b81da-4a1d-4c01-9618-7df74ae186e4" (UID: "3a8b81da-4a1d-4c01-9618-7df74ae186e4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:59:32 crc kubenswrapper[4725]: I1014 13:59:32.179927 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a8b81da-4a1d-4c01-9618-7df74ae186e4-kube-api-access-8jjgb" (OuterVolumeSpecName: "kube-api-access-8jjgb") pod "3a8b81da-4a1d-4c01-9618-7df74ae186e4" (UID: "3a8b81da-4a1d-4c01-9618-7df74ae186e4"). InnerVolumeSpecName "kube-api-access-8jjgb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 13:59:32 crc kubenswrapper[4725]: I1014 13:59:32.189434 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a8b81da-4a1d-4c01-9618-7df74ae186e4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3a8b81da-4a1d-4c01-9618-7df74ae186e4" (UID: "3a8b81da-4a1d-4c01-9618-7df74ae186e4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 13:59:32 crc kubenswrapper[4725]: I1014 13:59:32.277202 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a8b81da-4a1d-4c01-9618-7df74ae186e4-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 13:59:32 crc kubenswrapper[4725]: I1014 13:59:32.277234 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a8b81da-4a1d-4c01-9618-7df74ae186e4-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 13:59:32 crc kubenswrapper[4725]: I1014 13:59:32.277244 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jjgb\" (UniqueName: \"kubernetes.io/projected/3a8b81da-4a1d-4c01-9618-7df74ae186e4-kube-api-access-8jjgb\") on node \"crc\" DevicePath \"\"" Oct 14 13:59:32 crc kubenswrapper[4725]: I1014 13:59:32.535637 4725 generic.go:334] "Generic (PLEG): container finished" podID="3a8b81da-4a1d-4c01-9618-7df74ae186e4" containerID="0e0e518ddf4fa7b733409b024a76a4d7ba068857b8d9172d2004f5dd5d155e00" exitCode=0 Oct 14 13:59:32 crc kubenswrapper[4725]: I1014 13:59:32.535683 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5fnt2" Oct 14 13:59:32 crc kubenswrapper[4725]: I1014 13:59:32.535702 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5fnt2" event={"ID":"3a8b81da-4a1d-4c01-9618-7df74ae186e4","Type":"ContainerDied","Data":"0e0e518ddf4fa7b733409b024a76a4d7ba068857b8d9172d2004f5dd5d155e00"} Oct 14 13:59:32 crc kubenswrapper[4725]: I1014 13:59:32.535986 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5fnt2" event={"ID":"3a8b81da-4a1d-4c01-9618-7df74ae186e4","Type":"ContainerDied","Data":"551c5d10af1c18fb1a2cd23bc0db1fa2fff5e5e8944a080c9e7cf0155968a925"} Oct 14 13:59:32 crc kubenswrapper[4725]: I1014 13:59:32.536006 4725 scope.go:117] "RemoveContainer" containerID="0e0e518ddf4fa7b733409b024a76a4d7ba068857b8d9172d2004f5dd5d155e00" Oct 14 13:59:32 crc kubenswrapper[4725]: I1014 13:59:32.576582 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5fnt2"] Oct 14 13:59:32 crc kubenswrapper[4725]: I1014 13:59:32.577733 4725 scope.go:117] "RemoveContainer" containerID="b2e045b3a02edcc94e128ecb4c68ca24a49073b9d1c13f54563b13ea9e02a938" Oct 14 13:59:32 crc kubenswrapper[4725]: I1014 13:59:32.585493 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5fnt2"] Oct 14 13:59:32 crc kubenswrapper[4725]: I1014 13:59:32.609359 4725 scope.go:117] "RemoveContainer" containerID="dc203d7bf83a97fac97af1ee82c0ad40072de67693990d7040c9fc7a00dd8ba6" Oct 14 13:59:32 crc kubenswrapper[4725]: I1014 13:59:32.644502 4725 scope.go:117] "RemoveContainer" containerID="0e0e518ddf4fa7b733409b024a76a4d7ba068857b8d9172d2004f5dd5d155e00" Oct 14 13:59:32 crc kubenswrapper[4725]: E1014 13:59:32.644941 4725 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e0e518ddf4fa7b733409b024a76a4d7ba068857b8d9172d2004f5dd5d155e00\": container with ID starting with 0e0e518ddf4fa7b733409b024a76a4d7ba068857b8d9172d2004f5dd5d155e00 not found: ID does not exist" containerID="0e0e518ddf4fa7b733409b024a76a4d7ba068857b8d9172d2004f5dd5d155e00" Oct 14 13:59:32 crc kubenswrapper[4725]: I1014 13:59:32.644996 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e0e518ddf4fa7b733409b024a76a4d7ba068857b8d9172d2004f5dd5d155e00"} err="failed to get container status \"0e0e518ddf4fa7b733409b024a76a4d7ba068857b8d9172d2004f5dd5d155e00\": rpc error: code = NotFound desc = could not find container \"0e0e518ddf4fa7b733409b024a76a4d7ba068857b8d9172d2004f5dd5d155e00\": container with ID starting with 0e0e518ddf4fa7b733409b024a76a4d7ba068857b8d9172d2004f5dd5d155e00 not found: ID does not exist" Oct 14 13:59:32 crc kubenswrapper[4725]: I1014 13:59:32.645030 4725 scope.go:117] "RemoveContainer" containerID="b2e045b3a02edcc94e128ecb4c68ca24a49073b9d1c13f54563b13ea9e02a938" Oct 14 13:59:32 crc kubenswrapper[4725]: E1014 13:59:32.645377 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2e045b3a02edcc94e128ecb4c68ca24a49073b9d1c13f54563b13ea9e02a938\": container with ID starting with b2e045b3a02edcc94e128ecb4c68ca24a49073b9d1c13f54563b13ea9e02a938 not found: ID does not exist" containerID="b2e045b3a02edcc94e128ecb4c68ca24a49073b9d1c13f54563b13ea9e02a938" Oct 14 13:59:32 crc kubenswrapper[4725]: I1014 13:59:32.645416 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2e045b3a02edcc94e128ecb4c68ca24a49073b9d1c13f54563b13ea9e02a938"} err="failed to get container status \"b2e045b3a02edcc94e128ecb4c68ca24a49073b9d1c13f54563b13ea9e02a938\": rpc error: code = NotFound desc = could not find container \"b2e045b3a02edcc94e128ecb4c68ca24a49073b9d1c13f54563b13ea9e02a938\": container with ID starting with b2e045b3a02edcc94e128ecb4c68ca24a49073b9d1c13f54563b13ea9e02a938 not found: ID does not exist" Oct 14 13:59:32 crc kubenswrapper[4725]: I1014 13:59:32.645464 4725 scope.go:117] "RemoveContainer" containerID="dc203d7bf83a97fac97af1ee82c0ad40072de67693990d7040c9fc7a00dd8ba6" Oct 14 13:59:32 crc kubenswrapper[4725]: E1014 13:59:32.645805 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc203d7bf83a97fac97af1ee82c0ad40072de67693990d7040c9fc7a00dd8ba6\": container with ID starting with dc203d7bf83a97fac97af1ee82c0ad40072de67693990d7040c9fc7a00dd8ba6 not found: ID does not exist" containerID="dc203d7bf83a97fac97af1ee82c0ad40072de67693990d7040c9fc7a00dd8ba6" Oct 14 13:59:32 crc kubenswrapper[4725]: I1014 13:59:32.645840 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc203d7bf83a97fac97af1ee82c0ad40072de67693990d7040c9fc7a00dd8ba6"} err="failed to get container status \"dc203d7bf83a97fac97af1ee82c0ad40072de67693990d7040c9fc7a00dd8ba6\": rpc error: code = NotFound desc = could not find container \"dc203d7bf83a97fac97af1ee82c0ad40072de67693990d7040c9fc7a00dd8ba6\": container with ID starting with dc203d7bf83a97fac97af1ee82c0ad40072de67693990d7040c9fc7a00dd8ba6 not found: ID does not exist" Oct 14 13:59:33 crc kubenswrapper[4725]: I1014 13:59:33.936319 4725 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="3a8b81da-4a1d-4c01-9618-7df74ae186e4" path="/var/lib/kubelet/pods/3a8b81da-4a1d-4c01-9618-7df74ae186e4/volumes" Oct 14 14:00:00 crc kubenswrapper[4725]: I1014 14:00:00.149517 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340840-qbkjn"] Oct 14 14:00:00 crc kubenswrapper[4725]: E1014 14:00:00.150337 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a8b81da-4a1d-4c01-9618-7df74ae186e4" containerName="extract-content" Oct 14 14:00:00 crc kubenswrapper[4725]: I1014 14:00:00.150350 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a8b81da-4a1d-4c01-9618-7df74ae186e4" containerName="extract-content" Oct 14 14:00:00 crc kubenswrapper[4725]: E1014 14:00:00.150363 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a8b81da-4a1d-4c01-9618-7df74ae186e4" containerName="registry-server" Oct 14 14:00:00 crc kubenswrapper[4725]: I1014 14:00:00.150368 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a8b81da-4a1d-4c01-9618-7df74ae186e4" containerName="registry-server" Oct 14 14:00:00 crc kubenswrapper[4725]: E1014 14:00:00.150384 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd0a86b0-e908-472b-93d0-5eede843a424" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Oct 14 14:00:00 crc kubenswrapper[4725]: I1014 14:00:00.150393 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd0a86b0-e908-472b-93d0-5eede843a424" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Oct 14 14:00:00 crc kubenswrapper[4725]: E1014 14:00:00.150422 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a8b81da-4a1d-4c01-9618-7df74ae186e4" containerName="extract-utilities" Oct 14 14:00:00 crc kubenswrapper[4725]: I1014 14:00:00.150428 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a8b81da-4a1d-4c01-9618-7df74ae186e4" containerName="extract-utilities" Oct 14 14:00:00 crc kubenswrapper[4725]: I1014 14:00:00.150620 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a8b81da-4a1d-4c01-9618-7df74ae186e4" containerName="registry-server" Oct 14 14:00:00 crc kubenswrapper[4725]: I1014 14:00:00.150633 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd0a86b0-e908-472b-93d0-5eede843a424" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Oct 14 14:00:00 crc kubenswrapper[4725]: I1014 14:00:00.151199 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340840-qbkjn" Oct 14 14:00:00 crc kubenswrapper[4725]: I1014 14:00:00.155123 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 14 14:00:00 crc kubenswrapper[4725]: I1014 14:00:00.155562 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 14 14:00:00 crc kubenswrapper[4725]: I1014 14:00:00.157971 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340840-qbkjn"] Oct 14 14:00:00 crc kubenswrapper[4725]: I1014 14:00:00.262005 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n4bd\" (UniqueName: \"kubernetes.io/projected/66f01db6-1227-411d-8af1-df1f29e35491-kube-api-access-8n4bd\") pod \"collect-profiles-29340840-qbkjn\" (UID: \"66f01db6-1227-411d-8af1-df1f29e35491\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340840-qbkjn" Oct 14 14:00:00 crc kubenswrapper[4725]: I1014 14:00:00.262288 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66f01db6-1227-411d-8af1-df1f29e35491-config-volume\") pod \"collect-profiles-29340840-qbkjn\" (UID: \"66f01db6-1227-411d-8af1-df1f29e35491\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340840-qbkjn" Oct 14 14:00:00 crc kubenswrapper[4725]: I1014 14:00:00.262554 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66f01db6-1227-411d-8af1-df1f29e35491-secret-volume\") pod \"collect-profiles-29340840-qbkjn\" (UID: \"66f01db6-1227-411d-8af1-df1f29e35491\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340840-qbkjn" Oct 14 14:00:00 crc kubenswrapper[4725]: I1014 14:00:00.367848 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66f01db6-1227-411d-8af1-df1f29e35491-config-volume\") pod \"collect-profiles-29340840-qbkjn\" (UID: \"66f01db6-1227-411d-8af1-df1f29e35491\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340840-qbkjn" Oct 14 14:00:00 crc kubenswrapper[4725]: I1014 14:00:00.367925 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66f01db6-1227-411d-8af1-df1f29e35491-secret-volume\") pod \"collect-profiles-29340840-qbkjn\" (UID: \"66f01db6-1227-411d-8af1-df1f29e35491\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340840-qbkjn" Oct 14 14:00:00 crc kubenswrapper[4725]: I1014 14:00:00.368017 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n4bd\" (UniqueName: \"kubernetes.io/projected/66f01db6-1227-411d-8af1-df1f29e35491-kube-api-access-8n4bd\") pod \"collect-profiles-29340840-qbkjn\" (UID: \"66f01db6-1227-411d-8af1-df1f29e35491\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340840-qbkjn" Oct 14 14:00:00 crc kubenswrapper[4725]: I1014 14:00:00.371010 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66f01db6-1227-411d-8af1-df1f29e35491-config-volume\") pod 
\"collect-profiles-29340840-qbkjn\" (UID: \"66f01db6-1227-411d-8af1-df1f29e35491\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340840-qbkjn" Oct 14 14:00:00 crc kubenswrapper[4725]: I1014 14:00:00.385709 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66f01db6-1227-411d-8af1-df1f29e35491-secret-volume\") pod \"collect-profiles-29340840-qbkjn\" (UID: \"66f01db6-1227-411d-8af1-df1f29e35491\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340840-qbkjn" Oct 14 14:00:00 crc kubenswrapper[4725]: I1014 14:00:00.391063 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n4bd\" (UniqueName: \"kubernetes.io/projected/66f01db6-1227-411d-8af1-df1f29e35491-kube-api-access-8n4bd\") pod \"collect-profiles-29340840-qbkjn\" (UID: \"66f01db6-1227-411d-8af1-df1f29e35491\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340840-qbkjn" Oct 14 14:00:00 crc kubenswrapper[4725]: I1014 14:00:00.487902 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340840-qbkjn" Oct 14 14:00:00 crc kubenswrapper[4725]: I1014 14:00:00.872090 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340840-qbkjn"] Oct 14 14:00:01 crc kubenswrapper[4725]: I1014 14:00:01.876668 4725 generic.go:334] "Generic (PLEG): container finished" podID="66f01db6-1227-411d-8af1-df1f29e35491" containerID="392fc7a5fe67a6092eb8e00682740e068a5f90de971e11d9b6ff8842d4cb59f4" exitCode=0 Oct 14 14:00:01 crc kubenswrapper[4725]: I1014 14:00:01.876791 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340840-qbkjn" event={"ID":"66f01db6-1227-411d-8af1-df1f29e35491","Type":"ContainerDied","Data":"392fc7a5fe67a6092eb8e00682740e068a5f90de971e11d9b6ff8842d4cb59f4"} Oct 14 14:00:01 crc kubenswrapper[4725]: I1014 14:00:01.877091 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340840-qbkjn" event={"ID":"66f01db6-1227-411d-8af1-df1f29e35491","Type":"ContainerStarted","Data":"e5de4302ced18fd2e4c84cd4a1d69ee0c86d9fdacb7558a43c2a9cd1a0074e39"} Oct 14 14:00:02 crc kubenswrapper[4725]: I1014 14:00:02.520954 4725 patch_prober.go:28] interesting pod/machine-config-daemon-t9hh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 14:00:02 crc kubenswrapper[4725]: I1014 14:00:02.521573 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 14:00:03 crc kubenswrapper[4725]: I1014 14:00:03.257605 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340840-qbkjn" Oct 14 14:00:03 crc kubenswrapper[4725]: I1014 14:00:03.333949 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66f01db6-1227-411d-8af1-df1f29e35491-secret-volume\") pod \"66f01db6-1227-411d-8af1-df1f29e35491\" (UID: \"66f01db6-1227-411d-8af1-df1f29e35491\") " Oct 14 14:00:03 crc kubenswrapper[4725]: I1014 14:00:03.334199 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66f01db6-1227-411d-8af1-df1f29e35491-config-volume\") pod \"66f01db6-1227-411d-8af1-df1f29e35491\" (UID: \"66f01db6-1227-411d-8af1-df1f29e35491\") " Oct 14 14:00:03 crc kubenswrapper[4725]: I1014 14:00:03.334299 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8n4bd\" (UniqueName: \"kubernetes.io/projected/66f01db6-1227-411d-8af1-df1f29e35491-kube-api-access-8n4bd\") pod \"66f01db6-1227-411d-8af1-df1f29e35491\" (UID: \"66f01db6-1227-411d-8af1-df1f29e35491\") " Oct 14 14:00:03 crc kubenswrapper[4725]: I1014 14:00:03.334996 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66f01db6-1227-411d-8af1-df1f29e35491-config-volume" (OuterVolumeSpecName: "config-volume") pod "66f01db6-1227-411d-8af1-df1f29e35491" (UID: "66f01db6-1227-411d-8af1-df1f29e35491"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:00:03 crc kubenswrapper[4725]: I1014 14:00:03.340044 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66f01db6-1227-411d-8af1-df1f29e35491-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "66f01db6-1227-411d-8af1-df1f29e35491" (UID: "66f01db6-1227-411d-8af1-df1f29e35491"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:00:03 crc kubenswrapper[4725]: I1014 14:00:03.352116 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66f01db6-1227-411d-8af1-df1f29e35491-kube-api-access-8n4bd" (OuterVolumeSpecName: "kube-api-access-8n4bd") pod "66f01db6-1227-411d-8af1-df1f29e35491" (UID: "66f01db6-1227-411d-8af1-df1f29e35491"). InnerVolumeSpecName "kube-api-access-8n4bd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:00:03 crc kubenswrapper[4725]: I1014 14:00:03.436345 4725 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66f01db6-1227-411d-8af1-df1f29e35491-config-volume\") on node \"crc\" DevicePath \"\"" Oct 14 14:00:03 crc kubenswrapper[4725]: I1014 14:00:03.436392 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8n4bd\" (UniqueName: \"kubernetes.io/projected/66f01db6-1227-411d-8af1-df1f29e35491-kube-api-access-8n4bd\") on node \"crc\" DevicePath \"\"" Oct 14 14:00:03 crc kubenswrapper[4725]: I1014 14:00:03.436414 4725 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66f01db6-1227-411d-8af1-df1f29e35491-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 14 14:00:03 crc kubenswrapper[4725]: I1014 14:00:03.902709 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340840-qbkjn" event={"ID":"66f01db6-1227-411d-8af1-df1f29e35491","Type":"ContainerDied","Data":"e5de4302ced18fd2e4c84cd4a1d69ee0c86d9fdacb7558a43c2a9cd1a0074e39"} Oct 14 14:00:03 crc kubenswrapper[4725]: I1014 14:00:03.903068 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5de4302ced18fd2e4c84cd4a1d69ee0c86d9fdacb7558a43c2a9cd1a0074e39" Oct 14 14:00:03 crc kubenswrapper[4725]: I1014 14:00:03.902776 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340840-qbkjn" Oct 14 14:00:04 crc kubenswrapper[4725]: I1014 14:00:04.327264 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340795-6q4fq"] Oct 14 14:00:04 crc kubenswrapper[4725]: I1014 14:00:04.333923 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340795-6q4fq"] Oct 14 14:00:05 crc kubenswrapper[4725]: I1014 14:00:05.937095 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4f5074-61bb-4e5b-91ce-d8149ddb50f1" path="/var/lib/kubelet/pods/9d4f5074-61bb-4e5b-91ce-d8149ddb50f1/volumes" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.168508 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Oct 14 14:00:27 crc kubenswrapper[4725]: E1014 14:00:27.169613 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66f01db6-1227-411d-8af1-df1f29e35491" containerName="collect-profiles" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.169633 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="66f01db6-1227-411d-8af1-df1f29e35491" containerName="collect-profiles" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.169902 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="66f01db6-1227-411d-8af1-df1f29e35491" containerName="collect-profiles" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.170685 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.173702 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.174047 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.174215 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.174661 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-ghhdq" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.224078 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.233840 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qc44g"] Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.236258 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qc44g" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.258476 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qc44g"] Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.354669 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/148578f5-c02c-4ef4-a214-87532b2d29e2-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"148578f5-c02c-4ef4-a214-87532b2d29e2\") " pod="openstack/tempest-tests-tempest" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.354783 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"148578f5-c02c-4ef4-a214-87532b2d29e2\") " pod="openstack/tempest-tests-tempest" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.354822 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5a2ac74-a75f-4d6f-9cc8-2b5c15726252-catalog-content\") pod \"community-operators-qc44g\" (UID: \"d5a2ac74-a75f-4d6f-9cc8-2b5c15726252\") " pod="openshift-marketplace/community-operators-qc44g" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.354862 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/148578f5-c02c-4ef4-a214-87532b2d29e2-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"148578f5-c02c-4ef4-a214-87532b2d29e2\") " pod="openstack/tempest-tests-tempest" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.354895 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/148578f5-c02c-4ef4-a214-87532b2d29e2-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"148578f5-c02c-4ef4-a214-87532b2d29e2\") " pod="openstack/tempest-tests-tempest" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.354929 4725 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5a2ac74-a75f-4d6f-9cc8-2b5c15726252-utilities\") pod \"community-operators-qc44g\" (UID: \"d5a2ac74-a75f-4d6f-9cc8-2b5c15726252\") " pod="openshift-marketplace/community-operators-qc44g" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.355072 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwn5q\" (UniqueName: \"kubernetes.io/projected/d5a2ac74-a75f-4d6f-9cc8-2b5c15726252-kube-api-access-cwn5q\") pod \"community-operators-qc44g\" (UID: \"d5a2ac74-a75f-4d6f-9cc8-2b5c15726252\") " pod="openshift-marketplace/community-operators-qc44g" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.355122 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/148578f5-c02c-4ef4-a214-87532b2d29e2-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"148578f5-c02c-4ef4-a214-87532b2d29e2\") " pod="openstack/tempest-tests-tempest" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.355171 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/148578f5-c02c-4ef4-a214-87532b2d29e2-config-data\") pod \"tempest-tests-tempest\" (UID: \"148578f5-c02c-4ef4-a214-87532b2d29e2\") " pod="openstack/tempest-tests-tempest" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.355196 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6xwv\" (UniqueName: \"kubernetes.io/projected/148578f5-c02c-4ef4-a214-87532b2d29e2-kube-api-access-f6xwv\") pod \"tempest-tests-tempest\" (UID: \"148578f5-c02c-4ef4-a214-87532b2d29e2\") " pod="openstack/tempest-tests-tempest" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.355290 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/148578f5-c02c-4ef4-a214-87532b2d29e2-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"148578f5-c02c-4ef4-a214-87532b2d29e2\") " pod="openstack/tempest-tests-tempest" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.355381 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/148578f5-c02c-4ef4-a214-87532b2d29e2-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"148578f5-c02c-4ef4-a214-87532b2d29e2\") " pod="openstack/tempest-tests-tempest" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.457300 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/148578f5-c02c-4ef4-a214-87532b2d29e2-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"148578f5-c02c-4ef4-a214-87532b2d29e2\") " pod="openstack/tempest-tests-tempest" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.457394 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/148578f5-c02c-4ef4-a214-87532b2d29e2-config-data\") pod \"tempest-tests-tempest\" (UID: \"148578f5-c02c-4ef4-a214-87532b2d29e2\") " pod="openstack/tempest-tests-tempest" Oct 14 14:00:27 crc kubenswrapper[4725]: 
I1014 14:00:27.457423 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6xwv\" (UniqueName: \"kubernetes.io/projected/148578f5-c02c-4ef4-a214-87532b2d29e2-kube-api-access-f6xwv\") pod \"tempest-tests-tempest\" (UID: \"148578f5-c02c-4ef4-a214-87532b2d29e2\") " pod="openstack/tempest-tests-tempest" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.457463 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/148578f5-c02c-4ef4-a214-87532b2d29e2-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"148578f5-c02c-4ef4-a214-87532b2d29e2\") " pod="openstack/tempest-tests-tempest" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.457502 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/148578f5-c02c-4ef4-a214-87532b2d29e2-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"148578f5-c02c-4ef4-a214-87532b2d29e2\") " pod="openstack/tempest-tests-tempest" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.457546 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/148578f5-c02c-4ef4-a214-87532b2d29e2-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"148578f5-c02c-4ef4-a214-87532b2d29e2\") " pod="openstack/tempest-tests-tempest" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.457615 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"148578f5-c02c-4ef4-a214-87532b2d29e2\") " pod="openstack/tempest-tests-tempest" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.457641 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5a2ac74-a75f-4d6f-9cc8-2b5c15726252-catalog-content\") pod \"community-operators-qc44g\" (UID: \"d5a2ac74-a75f-4d6f-9cc8-2b5c15726252\") " pod="openshift-marketplace/community-operators-qc44g" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.457674 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/148578f5-c02c-4ef4-a214-87532b2d29e2-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"148578f5-c02c-4ef4-a214-87532b2d29e2\") " pod="openstack/tempest-tests-tempest" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.457702 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/148578f5-c02c-4ef4-a214-87532b2d29e2-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"148578f5-c02c-4ef4-a214-87532b2d29e2\") " pod="openstack/tempest-tests-tempest" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.457731 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5a2ac74-a75f-4d6f-9cc8-2b5c15726252-utilities\") pod \"community-operators-qc44g\" (UID: \"d5a2ac74-a75f-4d6f-9cc8-2b5c15726252\") " pod="openshift-marketplace/community-operators-qc44g" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.457791 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-cwn5q\" (UniqueName: \"kubernetes.io/projected/d5a2ac74-a75f-4d6f-9cc8-2b5c15726252-kube-api-access-cwn5q\") pod \"community-operators-qc44g\" (UID: \"d5a2ac74-a75f-4d6f-9cc8-2b5c15726252\") " pod="openshift-marketplace/community-operators-qc44g" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.458668 4725 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"148578f5-c02c-4ef4-a214-87532b2d29e2\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/tempest-tests-tempest" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.458812 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5a2ac74-a75f-4d6f-9cc8-2b5c15726252-catalog-content\") pod \"community-operators-qc44g\" (UID: \"d5a2ac74-a75f-4d6f-9cc8-2b5c15726252\") " pod="openshift-marketplace/community-operators-qc44g" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.458959 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5a2ac74-a75f-4d6f-9cc8-2b5c15726252-utilities\") pod \"community-operators-qc44g\" (UID: \"d5a2ac74-a75f-4d6f-9cc8-2b5c15726252\") " pod="openshift-marketplace/community-operators-qc44g" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.459102 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/148578f5-c02c-4ef4-a214-87532b2d29e2-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"148578f5-c02c-4ef4-a214-87532b2d29e2\") " pod="openstack/tempest-tests-tempest" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.458675 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/148578f5-c02c-4ef4-a214-87532b2d29e2-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"148578f5-c02c-4ef4-a214-87532b2d29e2\") " pod="openstack/tempest-tests-tempest" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.459337 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/148578f5-c02c-4ef4-a214-87532b2d29e2-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"148578f5-c02c-4ef4-a214-87532b2d29e2\") " pod="openstack/tempest-tests-tempest" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.459364 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/148578f5-c02c-4ef4-a214-87532b2d29e2-config-data\") pod \"tempest-tests-tempest\" (UID: \"148578f5-c02c-4ef4-a214-87532b2d29e2\") " pod="openstack/tempest-tests-tempest" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.469771 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/148578f5-c02c-4ef4-a214-87532b2d29e2-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"148578f5-c02c-4ef4-a214-87532b2d29e2\") " pod="openstack/tempest-tests-tempest" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.471041 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/secret/148578f5-c02c-4ef4-a214-87532b2d29e2-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"148578f5-c02c-4ef4-a214-87532b2d29e2\") " pod="openstack/tempest-tests-tempest" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.476206 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/148578f5-c02c-4ef4-a214-87532b2d29e2-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"148578f5-c02c-4ef4-a214-87532b2d29e2\") " pod="openstack/tempest-tests-tempest" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.486401 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6xwv\" (UniqueName: \"kubernetes.io/projected/148578f5-c02c-4ef4-a214-87532b2d29e2-kube-api-access-f6xwv\") pod \"tempest-tests-tempest\" (UID: \"148578f5-c02c-4ef4-a214-87532b2d29e2\") " pod="openstack/tempest-tests-tempest" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.496659 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwn5q\" (UniqueName: \"kubernetes.io/projected/d5a2ac74-a75f-4d6f-9cc8-2b5c15726252-kube-api-access-cwn5q\") pod \"community-operators-qc44g\" (UID: \"d5a2ac74-a75f-4d6f-9cc8-2b5c15726252\") " pod="openshift-marketplace/community-operators-qc44g" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.519745 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"148578f5-c02c-4ef4-a214-87532b2d29e2\") " pod="openstack/tempest-tests-tempest" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.562733 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qc44g" Oct 14 14:00:27 crc kubenswrapper[4725]: I1014 14:00:27.822031 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Oct 14 14:00:28 crc kubenswrapper[4725]: I1014 14:00:28.155124 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qc44g"] Oct 14 14:00:28 crc kubenswrapper[4725]: I1014 14:00:28.326134 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Oct 14 14:00:28 crc kubenswrapper[4725]: W1014 14:00:28.349714 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod148578f5_c02c_4ef4_a214_87532b2d29e2.slice/crio-eeccb0cd60a7f45815cf1e7d03ece365809e26f81cd93ec542fb0616312f6ff5 WatchSource:0}: Error finding container eeccb0cd60a7f45815cf1e7d03ece365809e26f81cd93ec542fb0616312f6ff5: Status 404 returned error can't find the container with id eeccb0cd60a7f45815cf1e7d03ece365809e26f81cd93ec542fb0616312f6ff5 Oct 14 14:00:29 crc kubenswrapper[4725]: I1014 14:00:29.132401 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"148578f5-c02c-4ef4-a214-87532b2d29e2","Type":"ContainerStarted","Data":"eeccb0cd60a7f45815cf1e7d03ece365809e26f81cd93ec542fb0616312f6ff5"} Oct 14 14:00:29 crc kubenswrapper[4725]: I1014 14:00:29.136928 4725 generic.go:334] "Generic (PLEG): container finished" podID="d5a2ac74-a75f-4d6f-9cc8-2b5c15726252" containerID="3c50e34c8b4030f613c62da908eaa6cc6c84e7c4dddb0cfa2167bfdbf09e1b2c" exitCode=0 Oct 14 14:00:29 crc kubenswrapper[4725]: I1014 14:00:29.137002 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qc44g" event={"ID":"d5a2ac74-a75f-4d6f-9cc8-2b5c15726252","Type":"ContainerDied","Data":"3c50e34c8b4030f613c62da908eaa6cc6c84e7c4dddb0cfa2167bfdbf09e1b2c"} Oct 14 14:00:29 crc kubenswrapper[4725]: I1014 14:00:29.137085 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qc44g" event={"ID":"d5a2ac74-a75f-4d6f-9cc8-2b5c15726252","Type":"ContainerStarted","Data":"b125f6ce9411f6aa495f5968177fb52a6e9b0e6767273444d31ad6d560a7a898"} Oct 14 14:00:31 crc kubenswrapper[4725]: I1014 14:00:31.154320 4725 generic.go:334] "Generic (PLEG): container finished" podID="d5a2ac74-a75f-4d6f-9cc8-2b5c15726252" containerID="2b0ec0557365ce3e263f24509c293ca16fdde97030329ce885dd3d2c0082127a" exitCode=0 Oct 14 14:00:31 crc kubenswrapper[4725]: I1014 14:00:31.154508 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qc44g" event={"ID":"d5a2ac74-a75f-4d6f-9cc8-2b5c15726252","Type":"ContainerDied","Data":"2b0ec0557365ce3e263f24509c293ca16fdde97030329ce885dd3d2c0082127a"} Oct 14 14:00:32 crc kubenswrapper[4725]: I1014 14:00:32.520739 4725 patch_prober.go:28] interesting pod/machine-config-daemon-t9hh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 14:00:32 crc kubenswrapper[4725]: I1014 14:00:32.521106 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 14:00:36 crc kubenswrapper[4725]: I1014 14:00:36.223072 4725 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qc44g" event={"ID":"d5a2ac74-a75f-4d6f-9cc8-2b5c15726252","Type":"ContainerStarted","Data":"2a59190f782562459ba89e1237fe928f6bc71486e8a3bbb9026f3d1c004e904a"} Oct 14 14:00:36 crc kubenswrapper[4725]: I1014 14:00:36.249369 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qc44g" podStartSLOduration=3.184389008 podStartE2EDuration="9.249342341s" podCreationTimestamp="2025-10-14 14:00:27 +0000 UTC" firstStartedPulling="2025-10-14 14:00:29.1405365 +0000 UTC m=+2745.988971359" lastFinishedPulling="2025-10-14 14:00:35.205489873 +0000 UTC m=+2752.053924692" observedRunningTime="2025-10-14 14:00:36.242839165 +0000 UTC m=+2753.091274034" watchObservedRunningTime="2025-10-14 14:00:36.249342341 +0000 UTC m=+2753.097777180" Oct 14 14:00:37 crc kubenswrapper[4725]: I1014 14:00:37.564346 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qc44g" Oct 14 14:00:37 crc kubenswrapper[4725]: I1014 14:00:37.564683 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qc44g" Oct 14 14:00:37 crc kubenswrapper[4725]: I1014 14:00:37.651889 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qc44g" Oct 14 14:00:47 crc kubenswrapper[4725]: I1014 14:00:47.629189 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qc44g" Oct 14 14:00:47 crc kubenswrapper[4725]: I1014 14:00:47.674046 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qc44g"] Oct 14 14:00:48 crc kubenswrapper[4725]: I1014 14:00:48.347857 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qc44g" podUID="d5a2ac74-a75f-4d6f-9cc8-2b5c15726252" containerName="registry-server" containerID="cri-o://2a59190f782562459ba89e1237fe928f6bc71486e8a3bbb9026f3d1c004e904a" gracePeriod=2 Oct 14 14:00:48 crc kubenswrapper[4725]: I1014 14:00:48.521279 4725 scope.go:117] "RemoveContainer" containerID="181f5706bdea2990a91001901bbec7f8d3461b974f9ffee1adfde16ddb54a858" Oct 14 14:00:49 crc kubenswrapper[4725]: I1014 14:00:49.359644 4725 generic.go:334] "Generic (PLEG): container finished" podID="d5a2ac74-a75f-4d6f-9cc8-2b5c15726252" containerID="2a59190f782562459ba89e1237fe928f6bc71486e8a3bbb9026f3d1c004e904a" exitCode=0 Oct 14 14:00:49 crc kubenswrapper[4725]: I1014 14:00:49.359691 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qc44g" event={"ID":"d5a2ac74-a75f-4d6f-9cc8-2b5c15726252","Type":"ContainerDied","Data":"2a59190f782562459ba89e1237fe928f6bc71486e8a3bbb9026f3d1c004e904a"} Oct 14 14:00:57 crc kubenswrapper[4725]: E1014 14:00:57.576481 4725 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2a59190f782562459ba89e1237fe928f6bc71486e8a3bbb9026f3d1c004e904a is running failed: container process not found" containerID="2a59190f782562459ba89e1237fe928f6bc71486e8a3bbb9026f3d1c004e904a" cmd=["grpc_health_probe","-addr=:50051"] Oct 14 14:00:57 crc kubenswrapper[4725]: E1014 14:00:57.577500 4725 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container 
is not created or running: checking if PID of 2a59190f782562459ba89e1237fe928f6bc71486e8a3bbb9026f3d1c004e904a is running failed: container process not found" containerID="2a59190f782562459ba89e1237fe928f6bc71486e8a3bbb9026f3d1c004e904a" cmd=["grpc_health_probe","-addr=:50051"] Oct 14 14:00:57 crc kubenswrapper[4725]: E1014 14:00:57.579218 4725 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2a59190f782562459ba89e1237fe928f6bc71486e8a3bbb9026f3d1c004e904a is running failed: container process not found" containerID="2a59190f782562459ba89e1237fe928f6bc71486e8a3bbb9026f3d1c004e904a" cmd=["grpc_health_probe","-addr=:50051"] Oct 14 14:00:57 crc kubenswrapper[4725]: E1014 14:00:57.579309 4725 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2a59190f782562459ba89e1237fe928f6bc71486e8a3bbb9026f3d1c004e904a is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-qc44g" podUID="d5a2ac74-a75f-4d6f-9cc8-2b5c15726252" containerName="registry-server" Oct 14 14:00:59 crc kubenswrapper[4725]: E1014 14:00:59.117025 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Oct 14 14:00:59 crc kubenswrapper[4725]: E1014 14:00:59.117863 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f6xwv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},
},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(148578f5-c02c-4ef4-a214-87532b2d29e2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 14 14:00:59 crc kubenswrapper[4725]: E1014 14:00:59.119209 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="148578f5-c02c-4ef4-a214-87532b2d29e2" Oct 14 14:00:59 crc kubenswrapper[4725]: I1014 14:00:59.391067 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qc44g" Oct 14 14:00:59 crc kubenswrapper[4725]: I1014 14:00:59.495383 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qc44g" Oct 14 14:00:59 crc kubenswrapper[4725]: I1014 14:00:59.495376 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qc44g" event={"ID":"d5a2ac74-a75f-4d6f-9cc8-2b5c15726252","Type":"ContainerDied","Data":"b125f6ce9411f6aa495f5968177fb52a6e9b0e6767273444d31ad6d560a7a898"} Oct 14 14:00:59 crc kubenswrapper[4725]: I1014 14:00:59.495544 4725 scope.go:117] "RemoveContainer" containerID="2a59190f782562459ba89e1237fe928f6bc71486e8a3bbb9026f3d1c004e904a" Oct 14 14:00:59 crc kubenswrapper[4725]: E1014 14:00:59.496705 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="148578f5-c02c-4ef4-a214-87532b2d29e2" Oct 14 14:00:59 crc kubenswrapper[4725]: I1014 14:00:59.523090 4725 scope.go:117] "RemoveContainer" containerID="2b0ec0557365ce3e263f24509c293ca16fdde97030329ce885dd3d2c0082127a" Oct 14 14:00:59 crc kubenswrapper[4725]: I1014 14:00:59.552850 4725 scope.go:117] "RemoveContainer" containerID="3c50e34c8b4030f613c62da908eaa6cc6c84e7c4dddb0cfa2167bfdbf09e1b2c" Oct 14 14:00:59 crc kubenswrapper[4725]: I1014 14:00:59.568170 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5a2ac74-a75f-4d6f-9cc8-2b5c15726252-catalog-content\") pod \"d5a2ac74-a75f-4d6f-9cc8-2b5c15726252\" (UID: \"d5a2ac74-a75f-4d6f-9cc8-2b5c15726252\") " Oct 14 14:00:59 crc kubenswrapper[4725]: I1014 14:00:59.568247 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5a2ac74-a75f-4d6f-9cc8-2b5c15726252-utilities\") pod \"d5a2ac74-a75f-4d6f-9cc8-2b5c15726252\" (UID: \"d5a2ac74-a75f-4d6f-9cc8-2b5c15726252\") " Oct 14 14:00:59 crc kubenswrapper[4725]: I1014 14:00:59.568343 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwn5q\" (UniqueName: \"kubernetes.io/projected/d5a2ac74-a75f-4d6f-9cc8-2b5c15726252-kube-api-access-cwn5q\") pod \"d5a2ac74-a75f-4d6f-9cc8-2b5c15726252\" (UID: \"d5a2ac74-a75f-4d6f-9cc8-2b5c15726252\") " Oct 14 14:00:59 crc kubenswrapper[4725]: I1014 14:00:59.569191 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5a2ac74-a75f-4d6f-9cc8-2b5c15726252-utilities" (OuterVolumeSpecName: "utilities") pod "d5a2ac74-a75f-4d6f-9cc8-2b5c15726252" (UID: "d5a2ac74-a75f-4d6f-9cc8-2b5c15726252"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 14:00:59 crc kubenswrapper[4725]: I1014 14:00:59.574946 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5a2ac74-a75f-4d6f-9cc8-2b5c15726252-kube-api-access-cwn5q" (OuterVolumeSpecName: "kube-api-access-cwn5q") pod "d5a2ac74-a75f-4d6f-9cc8-2b5c15726252" (UID: "d5a2ac74-a75f-4d6f-9cc8-2b5c15726252"). InnerVolumeSpecName "kube-api-access-cwn5q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:00:59 crc kubenswrapper[4725]: I1014 14:00:59.625224 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5a2ac74-a75f-4d6f-9cc8-2b5c15726252-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d5a2ac74-a75f-4d6f-9cc8-2b5c15726252" (UID: "d5a2ac74-a75f-4d6f-9cc8-2b5c15726252"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 14:00:59 crc kubenswrapper[4725]: I1014 14:00:59.671337 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5a2ac74-a75f-4d6f-9cc8-2b5c15726252-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 14:00:59 crc kubenswrapper[4725]: I1014 14:00:59.671381 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5a2ac74-a75f-4d6f-9cc8-2b5c15726252-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 14:00:59 crc kubenswrapper[4725]: I1014 14:00:59.671400 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwn5q\" (UniqueName: \"kubernetes.io/projected/d5a2ac74-a75f-4d6f-9cc8-2b5c15726252-kube-api-access-cwn5q\") on node \"crc\" DevicePath \"\"" Oct 14 14:00:59 crc kubenswrapper[4725]: I1014 14:00:59.835573 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qc44g"] Oct 14 14:00:59 crc kubenswrapper[4725]: I1014 14:00:59.844440 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qc44g"] Oct 14 14:00:59 crc kubenswrapper[4725]: I1014 14:00:59.932879 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5a2ac74-a75f-4d6f-9cc8-2b5c15726252" path="/var/lib/kubelet/pods/d5a2ac74-a75f-4d6f-9cc8-2b5c15726252/volumes" Oct 14 14:01:00 crc kubenswrapper[4725]: I1014 14:01:00.162010 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29340841-nk2gk"] Oct 14 14:01:00 crc kubenswrapper[4725]: E1014 14:01:00.162566 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5a2ac74-a75f-4d6f-9cc8-2b5c15726252" containerName="registry-server" Oct 14 14:01:00 crc kubenswrapper[4725]: I1014 14:01:00.162585 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5a2ac74-a75f-4d6f-9cc8-2b5c15726252" containerName="registry-server" Oct 14 14:01:00 crc kubenswrapper[4725]: E1014 14:01:00.162606 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5a2ac74-a75f-4d6f-9cc8-2b5c15726252" containerName="extract-utilities" Oct 14 14:01:00 crc kubenswrapper[4725]: I1014 14:01:00.162615 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5a2ac74-a75f-4d6f-9cc8-2b5c15726252" containerName="extract-utilities" Oct 14 14:01:00 crc kubenswrapper[4725]: E1014 14:01:00.162626 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5a2ac74-a75f-4d6f-9cc8-2b5c15726252" containerName="extract-content" Oct 14 14:01:00 crc kubenswrapper[4725]: I1014 14:01:00.162643 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5a2ac74-a75f-4d6f-9cc8-2b5c15726252" containerName="extract-content" Oct 14 14:01:00 crc kubenswrapper[4725]: I1014 14:01:00.162878 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5a2ac74-a75f-4d6f-9cc8-2b5c15726252" containerName="registry-server" Oct 14 14:01:00 crc kubenswrapper[4725]: I1014 14:01:00.163745 4725 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29340841-nk2gk" Oct 14 14:01:00 crc kubenswrapper[4725]: I1014 14:01:00.175713 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29340841-nk2gk"] Oct 14 14:01:00 crc kubenswrapper[4725]: I1014 14:01:00.284128 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db5e329b-6d98-4421-a995-f34e57421846-config-data\") pod \"keystone-cron-29340841-nk2gk\" (UID: \"db5e329b-6d98-4421-a995-f34e57421846\") " pod="openstack/keystone-cron-29340841-nk2gk" Oct 14 14:01:00 crc kubenswrapper[4725]: I1014 14:01:00.284439 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx9f6\" (UniqueName: \"kubernetes.io/projected/db5e329b-6d98-4421-a995-f34e57421846-kube-api-access-fx9f6\") pod \"keystone-cron-29340841-nk2gk\" (UID: \"db5e329b-6d98-4421-a995-f34e57421846\") " pod="openstack/keystone-cron-29340841-nk2gk" Oct 14 14:01:00 crc kubenswrapper[4725]: I1014 14:01:00.284677 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/db5e329b-6d98-4421-a995-f34e57421846-fernet-keys\") pod \"keystone-cron-29340841-nk2gk\" (UID: \"db5e329b-6d98-4421-a995-f34e57421846\") " pod="openstack/keystone-cron-29340841-nk2gk" Oct 14 14:01:00 crc kubenswrapper[4725]: I1014 14:01:00.284817 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db5e329b-6d98-4421-a995-f34e57421846-combined-ca-bundle\") pod \"keystone-cron-29340841-nk2gk\" (UID: \"db5e329b-6d98-4421-a995-f34e57421846\") " pod="openstack/keystone-cron-29340841-nk2gk" Oct 14 14:01:00 crc kubenswrapper[4725]: I1014 14:01:00.386655 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db5e329b-6d98-4421-a995-f34e57421846-config-data\") pod \"keystone-cron-29340841-nk2gk\" (UID: \"db5e329b-6d98-4421-a995-f34e57421846\") " pod="openstack/keystone-cron-29340841-nk2gk" Oct 14 14:01:00 crc kubenswrapper[4725]: I1014 14:01:00.386723 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx9f6\" (UniqueName: \"kubernetes.io/projected/db5e329b-6d98-4421-a995-f34e57421846-kube-api-access-fx9f6\") pod \"keystone-cron-29340841-nk2gk\" (UID: \"db5e329b-6d98-4421-a995-f34e57421846\") " pod="openstack/keystone-cron-29340841-nk2gk" Oct 14 14:01:00 crc kubenswrapper[4725]: I1014 14:01:00.386770 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/db5e329b-6d98-4421-a995-f34e57421846-fernet-keys\") pod \"keystone-cron-29340841-nk2gk\" (UID: \"db5e329b-6d98-4421-a995-f34e57421846\") " pod="openstack/keystone-cron-29340841-nk2gk" Oct 14 14:01:00 crc kubenswrapper[4725]: I1014 14:01:00.386805 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db5e329b-6d98-4421-a995-f34e57421846-combined-ca-bundle\") pod \"keystone-cron-29340841-nk2gk\" (UID: \"db5e329b-6d98-4421-a995-f34e57421846\") " pod="openstack/keystone-cron-29340841-nk2gk" Oct 14 14:01:00 crc kubenswrapper[4725]: I1014 14:01:00.392730 4725 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/db5e329b-6d98-4421-a995-f34e57421846-fernet-keys\") pod \"keystone-cron-29340841-nk2gk\" (UID: \"db5e329b-6d98-4421-a995-f34e57421846\") " pod="openstack/keystone-cron-29340841-nk2gk" Oct 14 14:01:00 crc kubenswrapper[4725]: I1014 14:01:00.393435 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db5e329b-6d98-4421-a995-f34e57421846-config-data\") pod \"keystone-cron-29340841-nk2gk\" (UID: \"db5e329b-6d98-4421-a995-f34e57421846\") " pod="openstack/keystone-cron-29340841-nk2gk" Oct 14 14:01:00 crc kubenswrapper[4725]: I1014 14:01:00.393773 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db5e329b-6d98-4421-a995-f34e57421846-combined-ca-bundle\") pod \"keystone-cron-29340841-nk2gk\" (UID: \"db5e329b-6d98-4421-a995-f34e57421846\") " pod="openstack/keystone-cron-29340841-nk2gk" Oct 14 14:01:00 crc kubenswrapper[4725]: I1014 14:01:00.406663 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx9f6\" (UniqueName: \"kubernetes.io/projected/db5e329b-6d98-4421-a995-f34e57421846-kube-api-access-fx9f6\") pod \"keystone-cron-29340841-nk2gk\" (UID: \"db5e329b-6d98-4421-a995-f34e57421846\") " pod="openstack/keystone-cron-29340841-nk2gk" Oct 14 14:01:00 crc kubenswrapper[4725]: I1014 14:01:00.494625 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29340841-nk2gk" Oct 14 14:01:00 crc kubenswrapper[4725]: I1014 14:01:00.956871 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29340841-nk2gk"] Oct 14 14:01:01 crc kubenswrapper[4725]: I1014 14:01:01.518760 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29340841-nk2gk" event={"ID":"db5e329b-6d98-4421-a995-f34e57421846","Type":"ContainerStarted","Data":"ce0f7070263bf9d36c3b6f723485b1d1b82f0e59c9c28922b8f939627843be31"} Oct 14 14:01:01 crc kubenswrapper[4725]: I1014 14:01:01.519196 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29340841-nk2gk" event={"ID":"db5e329b-6d98-4421-a995-f34e57421846","Type":"ContainerStarted","Data":"8d2e26a2a9316fe62633224c51117de7e4f37c8cd5771703dec5fc5822aa30c5"} Oct 14 14:01:01 crc kubenswrapper[4725]: I1014 14:01:01.545929 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29340841-nk2gk" podStartSLOduration=1.54590664 podStartE2EDuration="1.54590664s" podCreationTimestamp="2025-10-14 14:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:01:01.537927095 +0000 UTC m=+2778.386362004" watchObservedRunningTime="2025-10-14 14:01:01.54590664 +0000 UTC m=+2778.394341449" Oct 14 14:01:02 crc kubenswrapper[4725]: I1014 14:01:02.521049 4725 patch_prober.go:28] interesting pod/machine-config-daemon-t9hh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 14:01:02 crc kubenswrapper[4725]: I1014 14:01:02.521414 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" 
podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 14:01:02 crc kubenswrapper[4725]: I1014 14:01:02.521511 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" Oct 14 14:01:02 crc kubenswrapper[4725]: I1014 14:01:02.522614 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a392a4202d6674157990a617c0cb2e4f29e471b98c57f4957a82f7d89a3e7d41"} pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 14:01:02 crc kubenswrapper[4725]: I1014 14:01:02.522700 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" containerID="cri-o://a392a4202d6674157990a617c0cb2e4f29e471b98c57f4957a82f7d89a3e7d41" gracePeriod=600 Oct 14 14:01:03 crc kubenswrapper[4725]: I1014 14:01:03.548859 4725 generic.go:334] "Generic (PLEG): container finished" podID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerID="a392a4202d6674157990a617c0cb2e4f29e471b98c57f4957a82f7d89a3e7d41" exitCode=0 Oct 14 14:01:03 crc kubenswrapper[4725]: I1014 14:01:03.548953 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" event={"ID":"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c","Type":"ContainerDied","Data":"a392a4202d6674157990a617c0cb2e4f29e471b98c57f4957a82f7d89a3e7d41"} Oct 14 14:01:03 crc kubenswrapper[4725]: I1014 14:01:03.549445 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" event={"ID":"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c","Type":"ContainerStarted","Data":"2be0c39729f010a9af8428f8e57dd6b737cbe9299288deecdc98318e8f6194f2"} Oct 14 14:01:03 crc kubenswrapper[4725]: I1014 14:01:03.549523 4725 scope.go:117] "RemoveContainer" containerID="970512403f3a724310c7fbfce185ff25273c496de6bf6e9d555198ed6e64e7dd" Oct 14 14:01:03 crc kubenswrapper[4725]: I1014 14:01:03.551989 4725 generic.go:334] "Generic (PLEG): container finished" podID="db5e329b-6d98-4421-a995-f34e57421846" containerID="ce0f7070263bf9d36c3b6f723485b1d1b82f0e59c9c28922b8f939627843be31" exitCode=0 Oct 14 14:01:03 crc kubenswrapper[4725]: I1014 14:01:03.552033 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29340841-nk2gk" event={"ID":"db5e329b-6d98-4421-a995-f34e57421846","Type":"ContainerDied","Data":"ce0f7070263bf9d36c3b6f723485b1d1b82f0e59c9c28922b8f939627843be31"} Oct 14 14:01:04 crc kubenswrapper[4725]: I1014 14:01:04.928756 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29340841-nk2gk" Oct 14 14:01:04 crc kubenswrapper[4725]: I1014 14:01:04.982390 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/db5e329b-6d98-4421-a995-f34e57421846-fernet-keys\") pod \"db5e329b-6d98-4421-a995-f34e57421846\" (UID: \"db5e329b-6d98-4421-a995-f34e57421846\") " Oct 14 14:01:04 crc kubenswrapper[4725]: I1014 14:01:04.982444 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db5e329b-6d98-4421-a995-f34e57421846-config-data\") pod \"db5e329b-6d98-4421-a995-f34e57421846\" (UID: \"db5e329b-6d98-4421-a995-f34e57421846\") " Oct 14 14:01:04 crc kubenswrapper[4725]: I1014 14:01:04.982522 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fx9f6\" (UniqueName: \"kubernetes.io/projected/db5e329b-6d98-4421-a995-f34e57421846-kube-api-access-fx9f6\") pod \"db5e329b-6d98-4421-a995-f34e57421846\" (UID: \"db5e329b-6d98-4421-a995-f34e57421846\") " Oct 14 14:01:04 crc kubenswrapper[4725]: I1014 14:01:04.982548 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db5e329b-6d98-4421-a995-f34e57421846-combined-ca-bundle\") pod \"db5e329b-6d98-4421-a995-f34e57421846\" (UID: \"db5e329b-6d98-4421-a995-f34e57421846\") " Oct 14 14:01:04 crc kubenswrapper[4725]: I1014 14:01:04.987923 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db5e329b-6d98-4421-a995-f34e57421846-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "db5e329b-6d98-4421-a995-f34e57421846" (UID: "db5e329b-6d98-4421-a995-f34e57421846"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:01:05 crc kubenswrapper[4725]: I1014 14:01:05.003348 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db5e329b-6d98-4421-a995-f34e57421846-kube-api-access-fx9f6" (OuterVolumeSpecName: "kube-api-access-fx9f6") pod "db5e329b-6d98-4421-a995-f34e57421846" (UID: "db5e329b-6d98-4421-a995-f34e57421846"). InnerVolumeSpecName "kube-api-access-fx9f6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:01:05 crc kubenswrapper[4725]: I1014 14:01:05.009973 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db5e329b-6d98-4421-a995-f34e57421846-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db5e329b-6d98-4421-a995-f34e57421846" (UID: "db5e329b-6d98-4421-a995-f34e57421846"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:01:05 crc kubenswrapper[4725]: I1014 14:01:05.035884 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db5e329b-6d98-4421-a995-f34e57421846-config-data" (OuterVolumeSpecName: "config-data") pod "db5e329b-6d98-4421-a995-f34e57421846" (UID: "db5e329b-6d98-4421-a995-f34e57421846"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:01:05 crc kubenswrapper[4725]: I1014 14:01:05.085175 4725 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/db5e329b-6d98-4421-a995-f34e57421846-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 14 14:01:05 crc kubenswrapper[4725]: I1014 14:01:05.085212 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db5e329b-6d98-4421-a995-f34e57421846-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 14:01:05 crc kubenswrapper[4725]: I1014 14:01:05.085261 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fx9f6\" (UniqueName: \"kubernetes.io/projected/db5e329b-6d98-4421-a995-f34e57421846-kube-api-access-fx9f6\") on node \"crc\" DevicePath \"\"" Oct 14 14:01:05 crc kubenswrapper[4725]: I1014 14:01:05.085281 4725 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db5e329b-6d98-4421-a995-f34e57421846-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 14 14:01:05 crc kubenswrapper[4725]: I1014 14:01:05.577216 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29340841-nk2gk" event={"ID":"db5e329b-6d98-4421-a995-f34e57421846","Type":"ContainerDied","Data":"8d2e26a2a9316fe62633224c51117de7e4f37c8cd5771703dec5fc5822aa30c5"} Oct 14 14:01:05 crc kubenswrapper[4725]: I1014 14:01:05.577781 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d2e26a2a9316fe62633224c51117de7e4f37c8cd5771703dec5fc5822aa30c5" Oct 14 14:01:05 crc kubenswrapper[4725]: I1014 14:01:05.577388 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29340841-nk2gk" Oct 14 14:01:12 crc kubenswrapper[4725]: I1014 14:01:12.523334 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Oct 14 14:01:13 crc kubenswrapper[4725]: I1014 14:01:13.691647 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"148578f5-c02c-4ef4-a214-87532b2d29e2","Type":"ContainerStarted","Data":"b373f96046d1c755bd77cb78a7dcc0b85b9695d381bcf0c679c51fef0efe67a8"} Oct 14 14:01:13 crc kubenswrapper[4725]: I1014 14:01:13.720585 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.551948231 podStartE2EDuration="47.720566069s" podCreationTimestamp="2025-10-14 14:00:26 +0000 UTC" firstStartedPulling="2025-10-14 14:00:28.351772207 +0000 UTC m=+2745.200207056" lastFinishedPulling="2025-10-14 14:01:12.520390075 +0000 UTC m=+2789.368824894" observedRunningTime="2025-10-14 14:01:13.712754739 +0000 UTC m=+2790.561189548" watchObservedRunningTime="2025-10-14 14:01:13.720566069 +0000 UTC m=+2790.569000878" Oct 14 14:03:02 crc kubenswrapper[4725]: I1014 14:03:02.520237 4725 patch_prober.go:28] interesting pod/machine-config-daemon-t9hh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 14:03:02 crc kubenswrapper[4725]: I1014 14:03:02.520810 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 14:03:32 crc kubenswrapper[4725]: I1014 14:03:32.520366 4725 patch_prober.go:28] interesting pod/machine-config-daemon-t9hh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 14:03:32 crc kubenswrapper[4725]: I1014 14:03:32.521365 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 14:04:02 crc kubenswrapper[4725]: I1014 14:04:02.520745 4725 patch_prober.go:28] interesting pod/machine-config-daemon-t9hh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 14:04:02 crc kubenswrapper[4725]: I1014 14:04:02.521150 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 14:04:02 crc kubenswrapper[4725]: I1014 14:04:02.521186 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" Oct 14 14:04:02 crc kubenswrapper[4725]: I1014 14:04:02.521850 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2be0c39729f010a9af8428f8e57dd6b737cbe9299288deecdc98318e8f6194f2"} pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 14:04:02 crc kubenswrapper[4725]: I1014 14:04:02.521899 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" containerID="cri-o://2be0c39729f010a9af8428f8e57dd6b737cbe9299288deecdc98318e8f6194f2" gracePeriod=600 Oct 14 14:04:02 crc kubenswrapper[4725]: E1014 14:04:02.642034 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 14:04:03 crc kubenswrapper[4725]: I1014 14:04:03.346562 4725 generic.go:334] "Generic (PLEG): container finished" podID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerID="2be0c39729f010a9af8428f8e57dd6b737cbe9299288deecdc98318e8f6194f2" exitCode=0 Oct 14 14:04:03 crc kubenswrapper[4725]: I1014 14:04:03.346613 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" event={"ID":"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c","Type":"ContainerDied","Data":"2be0c39729f010a9af8428f8e57dd6b737cbe9299288deecdc98318e8f6194f2"} Oct 14 14:04:03 crc kubenswrapper[4725]: I1014 14:04:03.346645 4725 scope.go:117] "RemoveContainer" containerID="a392a4202d6674157990a617c0cb2e4f29e471b98c57f4957a82f7d89a3e7d41" Oct 14 14:04:03 crc kubenswrapper[4725]: I1014 14:04:03.347427 4725 scope.go:117] "RemoveContainer" containerID="2be0c39729f010a9af8428f8e57dd6b737cbe9299288deecdc98318e8f6194f2" Oct 14 14:04:03 crc kubenswrapper[4725]: E1014 14:04:03.347854 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 14:04:16 crc kubenswrapper[4725]: I1014 14:04:16.921569 4725 scope.go:117] "RemoveContainer" containerID="2be0c39729f010a9af8428f8e57dd6b737cbe9299288deecdc98318e8f6194f2" Oct 14 14:04:16 crc kubenswrapper[4725]: E1014 14:04:16.922203 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 14:04:30 crc kubenswrapper[4725]: I1014 14:04:30.921650 4725 scope.go:117] "RemoveContainer" containerID="2be0c39729f010a9af8428f8e57dd6b737cbe9299288deecdc98318e8f6194f2" Oct 14 14:04:30 crc kubenswrapper[4725]: E1014 14:04:30.922671 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 14:04:44 crc kubenswrapper[4725]: I1014 14:04:44.921924 4725 scope.go:117] "RemoveContainer" containerID="2be0c39729f010a9af8428f8e57dd6b737cbe9299288deecdc98318e8f6194f2" Oct 14 14:04:44 crc kubenswrapper[4725]: E1014 14:04:44.922537 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 14:04:55 crc kubenswrapper[4725]: I1014 14:04:55.922038 4725 scope.go:117] "RemoveContainer" containerID="2be0c39729f010a9af8428f8e57dd6b737cbe9299288deecdc98318e8f6194f2" Oct 14 14:04:55 crc kubenswrapper[4725]: E1014 14:04:55.922894 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 14:05:08 crc kubenswrapper[4725]: I1014 14:05:08.921075 4725 scope.go:117] "RemoveContainer" containerID="2be0c39729f010a9af8428f8e57dd6b737cbe9299288deecdc98318e8f6194f2" Oct 14 14:05:08 crc kubenswrapper[4725]: E1014 14:05:08.921995 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 14:05:22 crc kubenswrapper[4725]: I1014 14:05:22.921990 4725 scope.go:117] "RemoveContainer" containerID="2be0c39729f010a9af8428f8e57dd6b737cbe9299288deecdc98318e8f6194f2" Oct 14 14:05:22 crc kubenswrapper[4725]: E1014 14:05:22.922733 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 14:05:37 crc kubenswrapper[4725]: I1014 14:05:37.921066 4725 scope.go:117] "RemoveContainer" containerID="2be0c39729f010a9af8428f8e57dd6b737cbe9299288deecdc98318e8f6194f2" Oct 14 14:05:37 crc kubenswrapper[4725]: E1014 14:05:37.922128 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 14:05:49 crc kubenswrapper[4725]: I1014 14:05:49.812627 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6dj89"] Oct 14 14:05:49 crc kubenswrapper[4725]: E1014 14:05:49.814018 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db5e329b-6d98-4421-a995-f34e57421846" containerName="keystone-cron" Oct 14 14:05:49 crc kubenswrapper[4725]: I1014 14:05:49.814034 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="db5e329b-6d98-4421-a995-f34e57421846" containerName="keystone-cron" Oct 14 14:05:49 crc kubenswrapper[4725]: I1014 14:05:49.816685 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="db5e329b-6d98-4421-a995-f34e57421846" containerName="keystone-cron" Oct 14 14:05:49 crc kubenswrapper[4725]: I1014 14:05:49.818048 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6dj89" Oct 14 14:05:49 crc kubenswrapper[4725]: I1014 14:05:49.829643 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6dj89"] Oct 14 14:05:50 crc kubenswrapper[4725]: I1014 14:05:50.011995 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82341669-ddec-461a-bd2f-b1fd37d04ab3-catalog-content\") pod \"redhat-operators-6dj89\" (UID: \"82341669-ddec-461a-bd2f-b1fd37d04ab3\") " pod="openshift-marketplace/redhat-operators-6dj89" Oct 14 14:05:50 crc kubenswrapper[4725]: I1014 14:05:50.012068 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82341669-ddec-461a-bd2f-b1fd37d04ab3-utilities\") pod \"redhat-operators-6dj89\" (UID: \"82341669-ddec-461a-bd2f-b1fd37d04ab3\") " pod="openshift-marketplace/redhat-operators-6dj89" Oct 14 14:05:50 crc kubenswrapper[4725]: I1014 14:05:50.012114 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88jnm\" (UniqueName: \"kubernetes.io/projected/82341669-ddec-461a-bd2f-b1fd37d04ab3-kube-api-access-88jnm\") pod \"redhat-operators-6dj89\" (UID: \"82341669-ddec-461a-bd2f-b1fd37d04ab3\") " pod="openshift-marketplace/redhat-operators-6dj89" Oct 14 14:05:50 crc kubenswrapper[4725]: I1014 14:05:50.114049 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82341669-ddec-461a-bd2f-b1fd37d04ab3-catalog-content\") pod \"redhat-operators-6dj89\" (UID: \"82341669-ddec-461a-bd2f-b1fd37d04ab3\") " pod="openshift-marketplace/redhat-operators-6dj89" Oct 14 14:05:50 crc kubenswrapper[4725]: I1014 14:05:50.114113 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82341669-ddec-461a-bd2f-b1fd37d04ab3-utilities\") pod \"redhat-operators-6dj89\" (UID: \"82341669-ddec-461a-bd2f-b1fd37d04ab3\") " pod="openshift-marketplace/redhat-operators-6dj89" Oct 14 14:05:50 crc kubenswrapper[4725]: I1014 14:05:50.114151 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88jnm\" (UniqueName: \"kubernetes.io/projected/82341669-ddec-461a-bd2f-b1fd37d04ab3-kube-api-access-88jnm\") pod \"redhat-operators-6dj89\" (UID: \"82341669-ddec-461a-bd2f-b1fd37d04ab3\") " pod="openshift-marketplace/redhat-operators-6dj89" Oct 14 14:05:50 crc kubenswrapper[4725]: I1014 14:05:50.114767 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82341669-ddec-461a-bd2f-b1fd37d04ab3-catalog-content\") pod \"redhat-operators-6dj89\" (UID: \"82341669-ddec-461a-bd2f-b1fd37d04ab3\") " pod="openshift-marketplace/redhat-operators-6dj89" Oct 14 14:05:50 crc kubenswrapper[4725]: I1014 14:05:50.114899 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82341669-ddec-461a-bd2f-b1fd37d04ab3-utilities\") pod \"redhat-operators-6dj89\" (UID: \"82341669-ddec-461a-bd2f-b1fd37d04ab3\") " pod="openshift-marketplace/redhat-operators-6dj89" Oct 14 14:05:50 crc kubenswrapper[4725]: I1014 14:05:50.152066 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-88jnm\" (UniqueName: \"kubernetes.io/projected/82341669-ddec-461a-bd2f-b1fd37d04ab3-kube-api-access-88jnm\") pod \"redhat-operators-6dj89\" (UID: \"82341669-ddec-461a-bd2f-b1fd37d04ab3\") " pod="openshift-marketplace/redhat-operators-6dj89" Oct 14 14:05:50 crc kubenswrapper[4725]: I1014 14:05:50.162474 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6dj89" Oct 14 14:05:50 crc kubenswrapper[4725]: I1014 14:05:50.672275 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6dj89"] Oct 14 14:05:51 crc kubenswrapper[4725]: I1014 14:05:51.410147 4725 generic.go:334] "Generic (PLEG): container finished" podID="82341669-ddec-461a-bd2f-b1fd37d04ab3" containerID="5e64be8802f28eea043f669fb731c80d5e9e6100cd6f1d5cbad5aa57cc1ea13e" exitCode=0 Oct 14 14:05:51 crc kubenswrapper[4725]: I1014 14:05:51.410259 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6dj89" event={"ID":"82341669-ddec-461a-bd2f-b1fd37d04ab3","Type":"ContainerDied","Data":"5e64be8802f28eea043f669fb731c80d5e9e6100cd6f1d5cbad5aa57cc1ea13e"} Oct 14 14:05:51 crc kubenswrapper[4725]: I1014 14:05:51.410844 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6dj89" event={"ID":"82341669-ddec-461a-bd2f-b1fd37d04ab3","Type":"ContainerStarted","Data":"85647c00b7cb95f7bda13b735dd82284986575637abb872a0a35d21267358f08"} Oct 14 14:05:51 crc kubenswrapper[4725]: I1014 14:05:51.412947 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 14:05:52 crc kubenswrapper[4725]: I1014 14:05:52.421118 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6dj89" event={"ID":"82341669-ddec-461a-bd2f-b1fd37d04ab3","Type":"ContainerStarted","Data":"cefd60c7f3e05cbc667e7888316faf55a56e51932930ceb786b3f976553eb8d9"} Oct 14 14:05:52 crc kubenswrapper[4725]: I1014 14:05:52.921926 4725 scope.go:117] "RemoveContainer" containerID="2be0c39729f010a9af8428f8e57dd6b737cbe9299288deecdc98318e8f6194f2" Oct 14 14:05:52 crc kubenswrapper[4725]: E1014 14:05:52.922248 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 14:05:53 crc kubenswrapper[4725]: I1014 14:05:53.439207 4725 generic.go:334] "Generic (PLEG): container finished" podID="82341669-ddec-461a-bd2f-b1fd37d04ab3" containerID="cefd60c7f3e05cbc667e7888316faf55a56e51932930ceb786b3f976553eb8d9" exitCode=0 Oct 14 14:05:53 crc kubenswrapper[4725]: I1014 14:05:53.439446 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6dj89" event={"ID":"82341669-ddec-461a-bd2f-b1fd37d04ab3","Type":"ContainerDied","Data":"cefd60c7f3e05cbc667e7888316faf55a56e51932930ceb786b3f976553eb8d9"} Oct 14 14:05:55 crc kubenswrapper[4725]: I1014 14:05:55.457827 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6dj89" 
event={"ID":"82341669-ddec-461a-bd2f-b1fd37d04ab3","Type":"ContainerStarted","Data":"5906d9d72cd9ffa882f0a7649669b26abde114ffe10983d8bd2f219357e8f053"} Oct 14 14:05:55 crc kubenswrapper[4725]: I1014 14:05:55.496714 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6dj89" podStartSLOduration=3.612753491 podStartE2EDuration="6.496693545s" podCreationTimestamp="2025-10-14 14:05:49 +0000 UTC" firstStartedPulling="2025-10-14 14:05:51.412733816 +0000 UTC m=+3068.261168625" lastFinishedPulling="2025-10-14 14:05:54.29667387 +0000 UTC m=+3071.145108679" observedRunningTime="2025-10-14 14:05:55.481976831 +0000 UTC m=+3072.330411650" watchObservedRunningTime="2025-10-14 14:05:55.496693545 +0000 UTC m=+3072.345128364" Oct 14 14:06:00 crc kubenswrapper[4725]: I1014 14:06:00.162924 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6dj89" Oct 14 14:06:00 crc kubenswrapper[4725]: I1014 14:06:00.163388 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6dj89" Oct 14 14:06:00 crc kubenswrapper[4725]: I1014 14:06:00.231596 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6dj89" Oct 14 14:06:00 crc kubenswrapper[4725]: I1014 14:06:00.564787 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6dj89" Oct 14 14:06:00 crc kubenswrapper[4725]: I1014 14:06:00.608334 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6dj89"] Oct 14 14:06:02 crc kubenswrapper[4725]: I1014 14:06:02.538426 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6dj89" podUID="82341669-ddec-461a-bd2f-b1fd37d04ab3" containerName="registry-server" containerID="cri-o://5906d9d72cd9ffa882f0a7649669b26abde114ffe10983d8bd2f219357e8f053" gracePeriod=2 Oct 14 14:06:03 crc kubenswrapper[4725]: I1014 14:06:03.062128 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6dj89" Oct 14 14:06:03 crc kubenswrapper[4725]: I1014 14:06:03.179981 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88jnm\" (UniqueName: \"kubernetes.io/projected/82341669-ddec-461a-bd2f-b1fd37d04ab3-kube-api-access-88jnm\") pod \"82341669-ddec-461a-bd2f-b1fd37d04ab3\" (UID: \"82341669-ddec-461a-bd2f-b1fd37d04ab3\") " Oct 14 14:06:03 crc kubenswrapper[4725]: I1014 14:06:03.180046 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82341669-ddec-461a-bd2f-b1fd37d04ab3-catalog-content\") pod \"82341669-ddec-461a-bd2f-b1fd37d04ab3\" (UID: \"82341669-ddec-461a-bd2f-b1fd37d04ab3\") " Oct 14 14:06:03 crc kubenswrapper[4725]: I1014 14:06:03.180169 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82341669-ddec-461a-bd2f-b1fd37d04ab3-utilities\") pod \"82341669-ddec-461a-bd2f-b1fd37d04ab3\" (UID: \"82341669-ddec-461a-bd2f-b1fd37d04ab3\") " Oct 14 14:06:03 crc kubenswrapper[4725]: I1014 14:06:03.181694 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82341669-ddec-461a-bd2f-b1fd37d04ab3-utilities" (OuterVolumeSpecName: "utilities") pod "82341669-ddec-461a-bd2f-b1fd37d04ab3" (UID: "82341669-ddec-461a-bd2f-b1fd37d04ab3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 14:06:03 crc kubenswrapper[4725]: I1014 14:06:03.187664 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82341669-ddec-461a-bd2f-b1fd37d04ab3-kube-api-access-88jnm" (OuterVolumeSpecName: "kube-api-access-88jnm") pod "82341669-ddec-461a-bd2f-b1fd37d04ab3" (UID: "82341669-ddec-461a-bd2f-b1fd37d04ab3"). InnerVolumeSpecName "kube-api-access-88jnm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:06:03 crc kubenswrapper[4725]: I1014 14:06:03.283357 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82341669-ddec-461a-bd2f-b1fd37d04ab3-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 14:06:03 crc kubenswrapper[4725]: I1014 14:06:03.283390 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88jnm\" (UniqueName: \"kubernetes.io/projected/82341669-ddec-461a-bd2f-b1fd37d04ab3-kube-api-access-88jnm\") on node \"crc\" DevicePath \"\"" Oct 14 14:06:03 crc kubenswrapper[4725]: I1014 14:06:03.307226 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82341669-ddec-461a-bd2f-b1fd37d04ab3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "82341669-ddec-461a-bd2f-b1fd37d04ab3" (UID: "82341669-ddec-461a-bd2f-b1fd37d04ab3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 14:06:03 crc kubenswrapper[4725]: I1014 14:06:03.385141 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82341669-ddec-461a-bd2f-b1fd37d04ab3-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 14:06:03 crc kubenswrapper[4725]: I1014 14:06:03.552057 4725 generic.go:334] "Generic (PLEG): container finished" podID="82341669-ddec-461a-bd2f-b1fd37d04ab3" containerID="5906d9d72cd9ffa882f0a7649669b26abde114ffe10983d8bd2f219357e8f053" exitCode=0 Oct 14 14:06:03 crc kubenswrapper[4725]: I1014 14:06:03.552101 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6dj89" event={"ID":"82341669-ddec-461a-bd2f-b1fd37d04ab3","Type":"ContainerDied","Data":"5906d9d72cd9ffa882f0a7649669b26abde114ffe10983d8bd2f219357e8f053"} Oct 14 14:06:03 crc kubenswrapper[4725]: I1014 14:06:03.552128 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6dj89" event={"ID":"82341669-ddec-461a-bd2f-b1fd37d04ab3","Type":"ContainerDied","Data":"85647c00b7cb95f7bda13b735dd82284986575637abb872a0a35d21267358f08"} Oct 14 14:06:03 crc kubenswrapper[4725]: I1014 14:06:03.552145 4725 scope.go:117] "RemoveContainer" containerID="5906d9d72cd9ffa882f0a7649669b26abde114ffe10983d8bd2f219357e8f053" Oct 14 14:06:03 crc kubenswrapper[4725]: I1014 14:06:03.552156 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6dj89" Oct 14 14:06:03 crc kubenswrapper[4725]: I1014 14:06:03.573302 4725 scope.go:117] "RemoveContainer" containerID="cefd60c7f3e05cbc667e7888316faf55a56e51932930ceb786b3f976553eb8d9" Oct 14 14:06:03 crc kubenswrapper[4725]: I1014 14:06:03.594264 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6dj89"] Oct 14 14:06:03 crc kubenswrapper[4725]: I1014 14:06:03.605577 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6dj89"] Oct 14 14:06:03 crc kubenswrapper[4725]: I1014 14:06:03.614627 4725 scope.go:117] "RemoveContainer" containerID="5e64be8802f28eea043f669fb731c80d5e9e6100cd6f1d5cbad5aa57cc1ea13e" Oct 14 14:06:03 crc kubenswrapper[4725]: I1014 14:06:03.672953 4725 scope.go:117] "RemoveContainer" containerID="5906d9d72cd9ffa882f0a7649669b26abde114ffe10983d8bd2f219357e8f053" Oct 14 14:06:03 crc kubenswrapper[4725]: E1014 14:06:03.673546 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5906d9d72cd9ffa882f0a7649669b26abde114ffe10983d8bd2f219357e8f053\": container with ID starting with 5906d9d72cd9ffa882f0a7649669b26abde114ffe10983d8bd2f219357e8f053 not found: ID does not exist" containerID="5906d9d72cd9ffa882f0a7649669b26abde114ffe10983d8bd2f219357e8f053" Oct 14 14:06:03 crc kubenswrapper[4725]: I1014 14:06:03.673653 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5906d9d72cd9ffa882f0a7649669b26abde114ffe10983d8bd2f219357e8f053"} err="failed to get container status \"5906d9d72cd9ffa882f0a7649669b26abde114ffe10983d8bd2f219357e8f053\": rpc error: code = NotFound desc = could not find container \"5906d9d72cd9ffa882f0a7649669b26abde114ffe10983d8bd2f219357e8f053\": container with ID starting with 5906d9d72cd9ffa882f0a7649669b26abde114ffe10983d8bd2f219357e8f053 not found: ID does not exist" Oct 14 14:06:03 crc 
kubenswrapper[4725]: I1014 14:06:03.673686 4725 scope.go:117] "RemoveContainer" containerID="cefd60c7f3e05cbc667e7888316faf55a56e51932930ceb786b3f976553eb8d9" Oct 14 14:06:03 crc kubenswrapper[4725]: E1014 14:06:03.674072 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cefd60c7f3e05cbc667e7888316faf55a56e51932930ceb786b3f976553eb8d9\": container with ID starting with cefd60c7f3e05cbc667e7888316faf55a56e51932930ceb786b3f976553eb8d9 not found: ID does not exist" containerID="cefd60c7f3e05cbc667e7888316faf55a56e51932930ceb786b3f976553eb8d9" Oct 14 14:06:03 crc kubenswrapper[4725]: I1014 14:06:03.674113 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cefd60c7f3e05cbc667e7888316faf55a56e51932930ceb786b3f976553eb8d9"} err="failed to get container status \"cefd60c7f3e05cbc667e7888316faf55a56e51932930ceb786b3f976553eb8d9\": rpc error: code = NotFound desc = could not find container \"cefd60c7f3e05cbc667e7888316faf55a56e51932930ceb786b3f976553eb8d9\": container with ID starting with cefd60c7f3e05cbc667e7888316faf55a56e51932930ceb786b3f976553eb8d9 not found: ID does not exist" Oct 14 14:06:03 crc kubenswrapper[4725]: I1014 14:06:03.674141 4725 scope.go:117] "RemoveContainer" containerID="5e64be8802f28eea043f669fb731c80d5e9e6100cd6f1d5cbad5aa57cc1ea13e" Oct 14 14:06:03 crc kubenswrapper[4725]: E1014 14:06:03.674411 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e64be8802f28eea043f669fb731c80d5e9e6100cd6f1d5cbad5aa57cc1ea13e\": container with ID starting with 5e64be8802f28eea043f669fb731c80d5e9e6100cd6f1d5cbad5aa57cc1ea13e not found: ID does not exist" containerID="5e64be8802f28eea043f669fb731c80d5e9e6100cd6f1d5cbad5aa57cc1ea13e" Oct 14 14:06:03 crc kubenswrapper[4725]: I1014 14:06:03.674463 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e64be8802f28eea043f669fb731c80d5e9e6100cd6f1d5cbad5aa57cc1ea13e"} err="failed to get container status \"5e64be8802f28eea043f669fb731c80d5e9e6100cd6f1d5cbad5aa57cc1ea13e\": rpc error: code = NotFound desc = could not find container \"5e64be8802f28eea043f669fb731c80d5e9e6100cd6f1d5cbad5aa57cc1ea13e\": container with ID starting with 5e64be8802f28eea043f669fb731c80d5e9e6100cd6f1d5cbad5aa57cc1ea13e not found: ID does not exist" Oct 14 14:06:03 crc kubenswrapper[4725]: I1014 14:06:03.935138 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82341669-ddec-461a-bd2f-b1fd37d04ab3" path="/var/lib/kubelet/pods/82341669-ddec-461a-bd2f-b1fd37d04ab3/volumes" Oct 14 14:06:05 crc kubenswrapper[4725]: I1014 14:06:05.921374 4725 scope.go:117] "RemoveContainer" containerID="2be0c39729f010a9af8428f8e57dd6b737cbe9299288deecdc98318e8f6194f2" Oct 14 14:06:05 crc kubenswrapper[4725]: E1014 14:06:05.922619 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 14:06:16 crc kubenswrapper[4725]: I1014 14:06:16.921136 4725 scope.go:117] "RemoveContainer" containerID="2be0c39729f010a9af8428f8e57dd6b737cbe9299288deecdc98318e8f6194f2" 
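[Annotation] The records above trace the kubelet's liveness-probe loop end to end: the probe GETs http://127.0.0.1:8798/health, a refused connection counts as a failure, the container is killed once the failure threshold is hit ("Container machine-config-daemon failed liveness probe, will be restarted", gracePeriod=600), and subsequent restarts are gated by an exponential back-off that this log shows capped at 5m0s. Below is a minimal Go sketch of that check-and-back-off pattern; the endpoint and the 5m cap come from the log, while the period, threshold, and everything else are illustrative assumptions, not the kubelet's actual implementation.

// probe_sketch.go: illustrative reconstruction of a kubelet-style HTTP
// liveness probe with capped exponential restart back-off. Only the
// endpoint (http://127.0.0.1:8798/health) and the 5m0s cap are taken
// from the surrounding log; all other names and values are assumed.
package main

import (
	"fmt"
	"net/http"
	"time"
)

const (
	healthURL        = "http://127.0.0.1:8798/health"
	probePeriod      = 30 * time.Second // assumed; probe records above arrive ~30s apart
	failureThreshold = 3                // assumed consecutive failures before a kill
	backoffCap       = 5 * time.Minute  // "back-off 5m0s" in the log
)

// probeOnce returns nil only for a 2xx response; a refused connection
// surfaces as a transport error, matching "connect: connection refused".
func probeOnce(client *http.Client) error {
	resp, err := client.Get(healthURL)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 300 {
		return fmt.Errorf("unhealthy status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	failures := 0
	backoff := 10 * time.Second // assumed initial back-off
	for {
		if err := probeOnce(client); err != nil {
			failures++
			fmt.Printf("probe failed (%d/%d): %v\n", failures, failureThreshold, err)
			if failures >= failureThreshold {
				fmt.Printf("restarting container; next start gated by %s back-off\n", backoff)
				time.Sleep(backoff)
				// double the back-off, capped as in the log's "back-off 5m0s"
				if backoff *= 2; backoff > backoffCap {
					backoff = backoffCap
				}
				failures = 0
			}
		} else {
			// a successful probe resets both the failure count and the back-off
			failures, backoff = 0, 10*time.Second
		}
		time.Sleep(probePeriod)
	}
}

That capped back-off is visible in the timeline: the container is killed at 14:04:02, every sync attempt until then is rejected with "back-off 5m0s restarting failed container", and the ContainerStarted event for machine-config-daemon does not appear until 14:09:11, roughly the five-minute cap later.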
Oct 14 14:06:16 crc kubenswrapper[4725]: E1014 14:06:16.921793 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 14:06:29 crc kubenswrapper[4725]: I1014 14:06:29.921472 4725 scope.go:117] "RemoveContainer" containerID="2be0c39729f010a9af8428f8e57dd6b737cbe9299288deecdc98318e8f6194f2" Oct 14 14:06:29 crc kubenswrapper[4725]: E1014 14:06:29.922369 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 14:06:40 crc kubenswrapper[4725]: I1014 14:06:40.922368 4725 scope.go:117] "RemoveContainer" containerID="2be0c39729f010a9af8428f8e57dd6b737cbe9299288deecdc98318e8f6194f2" Oct 14 14:06:40 crc kubenswrapper[4725]: E1014 14:06:40.923801 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 14:06:54 crc kubenswrapper[4725]: I1014 14:06:54.922266 4725 scope.go:117] "RemoveContainer" containerID="2be0c39729f010a9af8428f8e57dd6b737cbe9299288deecdc98318e8f6194f2" Oct 14 14:06:54 crc kubenswrapper[4725]: E1014 14:06:54.923010 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 14:07:06 crc kubenswrapper[4725]: I1014 14:07:06.921206 4725 scope.go:117] "RemoveContainer" containerID="2be0c39729f010a9af8428f8e57dd6b737cbe9299288deecdc98318e8f6194f2" Oct 14 14:07:06 crc kubenswrapper[4725]: E1014 14:07:06.921905 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 14:07:18 crc kubenswrapper[4725]: I1014 14:07:18.921195 4725 scope.go:117] "RemoveContainer" containerID="2be0c39729f010a9af8428f8e57dd6b737cbe9299288deecdc98318e8f6194f2" Oct 14 14:07:18 crc kubenswrapper[4725]: E1014 14:07:18.922086 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 14:07:29 crc kubenswrapper[4725]: I1014 14:07:29.925210 4725 scope.go:117] "RemoveContainer" containerID="2be0c39729f010a9af8428f8e57dd6b737cbe9299288deecdc98318e8f6194f2" Oct 14 14:07:29 crc kubenswrapper[4725]: E1014 14:07:29.927142 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 14:07:41 crc kubenswrapper[4725]: I1014 14:07:41.922172 4725 scope.go:117] "RemoveContainer" containerID="2be0c39729f010a9af8428f8e57dd6b737cbe9299288deecdc98318e8f6194f2" Oct 14 14:07:41 crc kubenswrapper[4725]: E1014 14:07:41.923010 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 14:07:48 crc kubenswrapper[4725]: I1014 14:07:48.618975 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s4ntv"] Oct 14 14:07:48 crc kubenswrapper[4725]: E1014 14:07:48.620062 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82341669-ddec-461a-bd2f-b1fd37d04ab3" containerName="registry-server" Oct 14 14:07:48 crc kubenswrapper[4725]: I1014 14:07:48.620081 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="82341669-ddec-461a-bd2f-b1fd37d04ab3" containerName="registry-server" Oct 14 14:07:48 crc kubenswrapper[4725]: E1014 14:07:48.620132 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82341669-ddec-461a-bd2f-b1fd37d04ab3" containerName="extract-utilities" Oct 14 14:07:48 crc kubenswrapper[4725]: I1014 14:07:48.620141 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="82341669-ddec-461a-bd2f-b1fd37d04ab3" containerName="extract-utilities" Oct 14 14:07:48 crc kubenswrapper[4725]: E1014 14:07:48.620158 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82341669-ddec-461a-bd2f-b1fd37d04ab3" containerName="extract-content" Oct 14 14:07:48 crc kubenswrapper[4725]: I1014 14:07:48.620167 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="82341669-ddec-461a-bd2f-b1fd37d04ab3" containerName="extract-content" Oct 14 14:07:48 crc kubenswrapper[4725]: I1014 14:07:48.620436 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="82341669-ddec-461a-bd2f-b1fd37d04ab3" containerName="registry-server" Oct 14 14:07:48 crc kubenswrapper[4725]: I1014 14:07:48.622819 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s4ntv" Oct 14 14:07:48 crc kubenswrapper[4725]: I1014 14:07:48.630059 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s4ntv"] Oct 14 14:07:48 crc kubenswrapper[4725]: I1014 14:07:48.811518 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ca0a957-2f46-47f7-b577-ba370b6f1ef1-catalog-content\") pod \"certified-operators-s4ntv\" (UID: \"0ca0a957-2f46-47f7-b577-ba370b6f1ef1\") " pod="openshift-marketplace/certified-operators-s4ntv" Oct 14 14:07:48 crc kubenswrapper[4725]: I1014 14:07:48.812005 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ca0a957-2f46-47f7-b577-ba370b6f1ef1-utilities\") pod \"certified-operators-s4ntv\" (UID: \"0ca0a957-2f46-47f7-b577-ba370b6f1ef1\") " pod="openshift-marketplace/certified-operators-s4ntv" Oct 14 14:07:48 crc kubenswrapper[4725]: I1014 14:07:48.812056 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ktwq\" (UniqueName: \"kubernetes.io/projected/0ca0a957-2f46-47f7-b577-ba370b6f1ef1-kube-api-access-6ktwq\") pod \"certified-operators-s4ntv\" (UID: \"0ca0a957-2f46-47f7-b577-ba370b6f1ef1\") " pod="openshift-marketplace/certified-operators-s4ntv" Oct 14 14:07:48 crc kubenswrapper[4725]: I1014 14:07:48.914447 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ca0a957-2f46-47f7-b577-ba370b6f1ef1-utilities\") pod \"certified-operators-s4ntv\" (UID: \"0ca0a957-2f46-47f7-b577-ba370b6f1ef1\") " pod="openshift-marketplace/certified-operators-s4ntv" Oct 14 14:07:48 crc kubenswrapper[4725]: I1014 14:07:48.914504 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ktwq\" (UniqueName: \"kubernetes.io/projected/0ca0a957-2f46-47f7-b577-ba370b6f1ef1-kube-api-access-6ktwq\") pod \"certified-operators-s4ntv\" (UID: \"0ca0a957-2f46-47f7-b577-ba370b6f1ef1\") " pod="openshift-marketplace/certified-operators-s4ntv" Oct 14 14:07:48 crc kubenswrapper[4725]: I1014 14:07:48.914562 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ca0a957-2f46-47f7-b577-ba370b6f1ef1-catalog-content\") pod \"certified-operators-s4ntv\" (UID: \"0ca0a957-2f46-47f7-b577-ba370b6f1ef1\") " pod="openshift-marketplace/certified-operators-s4ntv" Oct 14 14:07:48 crc kubenswrapper[4725]: I1014 14:07:48.915055 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ca0a957-2f46-47f7-b577-ba370b6f1ef1-catalog-content\") pod \"certified-operators-s4ntv\" (UID: \"0ca0a957-2f46-47f7-b577-ba370b6f1ef1\") " pod="openshift-marketplace/certified-operators-s4ntv" Oct 14 14:07:48 crc kubenswrapper[4725]: I1014 14:07:48.915270 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ca0a957-2f46-47f7-b577-ba370b6f1ef1-utilities\") pod \"certified-operators-s4ntv\" (UID: \"0ca0a957-2f46-47f7-b577-ba370b6f1ef1\") " pod="openshift-marketplace/certified-operators-s4ntv" Oct 14 14:07:48 crc kubenswrapper[4725]: I1014 14:07:48.954266 4725 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6ktwq\" (UniqueName: \"kubernetes.io/projected/0ca0a957-2f46-47f7-b577-ba370b6f1ef1-kube-api-access-6ktwq\") pod \"certified-operators-s4ntv\" (UID: \"0ca0a957-2f46-47f7-b577-ba370b6f1ef1\") " pod="openshift-marketplace/certified-operators-s4ntv" Oct 14 14:07:48 crc kubenswrapper[4725]: I1014 14:07:48.994508 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s4ntv" Oct 14 14:07:49 crc kubenswrapper[4725]: I1014 14:07:49.501255 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s4ntv"] Oct 14 14:07:49 crc kubenswrapper[4725]: I1014 14:07:49.562882 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s4ntv" event={"ID":"0ca0a957-2f46-47f7-b577-ba370b6f1ef1","Type":"ContainerStarted","Data":"050682ac55a2f13d043b41faab900947ff0974b1fcbce8424d1fe4f00132a874"} Oct 14 14:07:50 crc kubenswrapper[4725]: I1014 14:07:50.581404 4725 generic.go:334] "Generic (PLEG): container finished" podID="0ca0a957-2f46-47f7-b577-ba370b6f1ef1" containerID="107c898346bed6f9aab68fb575aa2ec30f2d5f8da990a7d529b68335964c1699" exitCode=0 Oct 14 14:07:50 crc kubenswrapper[4725]: I1014 14:07:50.581470 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s4ntv" event={"ID":"0ca0a957-2f46-47f7-b577-ba370b6f1ef1","Type":"ContainerDied","Data":"107c898346bed6f9aab68fb575aa2ec30f2d5f8da990a7d529b68335964c1699"} Oct 14 14:07:52 crc kubenswrapper[4725]: I1014 14:07:52.605782 4725 generic.go:334] "Generic (PLEG): container finished" podID="0ca0a957-2f46-47f7-b577-ba370b6f1ef1" containerID="b3902d4419a084deb5b7afd96355f32224ded3f3b89858767baaa799e8a633bb" exitCode=0 Oct 14 14:07:52 crc kubenswrapper[4725]: I1014 14:07:52.605832 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s4ntv" event={"ID":"0ca0a957-2f46-47f7-b577-ba370b6f1ef1","Type":"ContainerDied","Data":"b3902d4419a084deb5b7afd96355f32224ded3f3b89858767baaa799e8a633bb"} Oct 14 14:07:53 crc kubenswrapper[4725]: I1014 14:07:53.617043 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s4ntv" event={"ID":"0ca0a957-2f46-47f7-b577-ba370b6f1ef1","Type":"ContainerStarted","Data":"71723eb2f839c342b7d9901260c23b34f79d63a6f232a2abdffef45bf8a6183b"} Oct 14 14:07:53 crc kubenswrapper[4725]: I1014 14:07:53.636005 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s4ntv" podStartSLOduration=3.085046517 podStartE2EDuration="5.635982726s" podCreationTimestamp="2025-10-14 14:07:48 +0000 UTC" firstStartedPulling="2025-10-14 14:07:50.584362744 +0000 UTC m=+3187.432797553" lastFinishedPulling="2025-10-14 14:07:53.135298953 +0000 UTC m=+3189.983733762" observedRunningTime="2025-10-14 14:07:53.633867448 +0000 UTC m=+3190.482302267" watchObservedRunningTime="2025-10-14 14:07:53.635982726 +0000 UTC m=+3190.484417535" Oct 14 14:07:54 crc kubenswrapper[4725]: I1014 14:07:54.922190 4725 scope.go:117] "RemoveContainer" containerID="2be0c39729f010a9af8428f8e57dd6b737cbe9299288deecdc98318e8f6194f2" Oct 14 14:07:54 crc kubenswrapper[4725]: E1014 14:07:54.922696 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 14:07:58 crc kubenswrapper[4725]: I1014 14:07:58.994644 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s4ntv" Oct 14 14:07:58 crc kubenswrapper[4725]: I1014 14:07:58.995289 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s4ntv" Oct 14 14:07:59 crc kubenswrapper[4725]: I1014 14:07:59.048703 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s4ntv" Oct 14 14:07:59 crc kubenswrapper[4725]: I1014 14:07:59.732277 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s4ntv" Oct 14 14:07:59 crc kubenswrapper[4725]: I1014 14:07:59.791015 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s4ntv"] Oct 14 14:08:01 crc kubenswrapper[4725]: I1014 14:08:01.699665 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-s4ntv" podUID="0ca0a957-2f46-47f7-b577-ba370b6f1ef1" containerName="registry-server" containerID="cri-o://71723eb2f839c342b7d9901260c23b34f79d63a6f232a2abdffef45bf8a6183b" gracePeriod=2 Oct 14 14:08:02 crc kubenswrapper[4725]: I1014 14:08:02.201679 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s4ntv" Oct 14 14:08:02 crc kubenswrapper[4725]: I1014 14:08:02.281699 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ktwq\" (UniqueName: \"kubernetes.io/projected/0ca0a957-2f46-47f7-b577-ba370b6f1ef1-kube-api-access-6ktwq\") pod \"0ca0a957-2f46-47f7-b577-ba370b6f1ef1\" (UID: \"0ca0a957-2f46-47f7-b577-ba370b6f1ef1\") " Oct 14 14:08:02 crc kubenswrapper[4725]: I1014 14:08:02.281745 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ca0a957-2f46-47f7-b577-ba370b6f1ef1-utilities\") pod \"0ca0a957-2f46-47f7-b577-ba370b6f1ef1\" (UID: \"0ca0a957-2f46-47f7-b577-ba370b6f1ef1\") " Oct 14 14:08:02 crc kubenswrapper[4725]: I1014 14:08:02.281803 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ca0a957-2f46-47f7-b577-ba370b6f1ef1-catalog-content\") pod \"0ca0a957-2f46-47f7-b577-ba370b6f1ef1\" (UID: \"0ca0a957-2f46-47f7-b577-ba370b6f1ef1\") " Oct 14 14:08:02 crc kubenswrapper[4725]: I1014 14:08:02.282908 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ca0a957-2f46-47f7-b577-ba370b6f1ef1-utilities" (OuterVolumeSpecName: "utilities") pod "0ca0a957-2f46-47f7-b577-ba370b6f1ef1" (UID: "0ca0a957-2f46-47f7-b577-ba370b6f1ef1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 14:08:02 crc kubenswrapper[4725]: I1014 14:08:02.288524 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ca0a957-2f46-47f7-b577-ba370b6f1ef1-kube-api-access-6ktwq" (OuterVolumeSpecName: "kube-api-access-6ktwq") pod "0ca0a957-2f46-47f7-b577-ba370b6f1ef1" (UID: "0ca0a957-2f46-47f7-b577-ba370b6f1ef1"). InnerVolumeSpecName "kube-api-access-6ktwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:08:02 crc kubenswrapper[4725]: I1014 14:08:02.337048 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ca0a957-2f46-47f7-b577-ba370b6f1ef1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0ca0a957-2f46-47f7-b577-ba370b6f1ef1" (UID: "0ca0a957-2f46-47f7-b577-ba370b6f1ef1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 14:08:02 crc kubenswrapper[4725]: I1014 14:08:02.384248 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ktwq\" (UniqueName: \"kubernetes.io/projected/0ca0a957-2f46-47f7-b577-ba370b6f1ef1-kube-api-access-6ktwq\") on node \"crc\" DevicePath \"\"" Oct 14 14:08:02 crc kubenswrapper[4725]: I1014 14:08:02.384290 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ca0a957-2f46-47f7-b577-ba370b6f1ef1-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 14:08:02 crc kubenswrapper[4725]: I1014 14:08:02.384303 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ca0a957-2f46-47f7-b577-ba370b6f1ef1-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 14:08:02 crc kubenswrapper[4725]: I1014 14:08:02.713033 4725 generic.go:334] "Generic (PLEG): container finished" podID="0ca0a957-2f46-47f7-b577-ba370b6f1ef1" containerID="71723eb2f839c342b7d9901260c23b34f79d63a6f232a2abdffef45bf8a6183b" exitCode=0 Oct 14 14:08:02 crc kubenswrapper[4725]: I1014 14:08:02.713082 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s4ntv" event={"ID":"0ca0a957-2f46-47f7-b577-ba370b6f1ef1","Type":"ContainerDied","Data":"71723eb2f839c342b7d9901260c23b34f79d63a6f232a2abdffef45bf8a6183b"} Oct 14 14:08:02 crc kubenswrapper[4725]: I1014 14:08:02.713097 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s4ntv" Oct 14 14:08:02 crc kubenswrapper[4725]: I1014 14:08:02.713121 4725 scope.go:117] "RemoveContainer" containerID="71723eb2f839c342b7d9901260c23b34f79d63a6f232a2abdffef45bf8a6183b" Oct 14 14:08:02 crc kubenswrapper[4725]: I1014 14:08:02.713109 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s4ntv" event={"ID":"0ca0a957-2f46-47f7-b577-ba370b6f1ef1","Type":"ContainerDied","Data":"050682ac55a2f13d043b41faab900947ff0974b1fcbce8424d1fe4f00132a874"} Oct 14 14:08:02 crc kubenswrapper[4725]: I1014 14:08:02.733562 4725 scope.go:117] "RemoveContainer" containerID="b3902d4419a084deb5b7afd96355f32224ded3f3b89858767baaa799e8a633bb" Oct 14 14:08:02 crc kubenswrapper[4725]: I1014 14:08:02.747872 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s4ntv"] Oct 14 14:08:02 crc kubenswrapper[4725]: I1014 14:08:02.758357 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-s4ntv"] Oct 14 14:08:02 crc kubenswrapper[4725]: I1014 14:08:02.772124 4725 scope.go:117] "RemoveContainer" containerID="107c898346bed6f9aab68fb575aa2ec30f2d5f8da990a7d529b68335964c1699" Oct 14 14:08:02 crc kubenswrapper[4725]: I1014 14:08:02.808407 4725 scope.go:117] "RemoveContainer" containerID="71723eb2f839c342b7d9901260c23b34f79d63a6f232a2abdffef45bf8a6183b" Oct 14 14:08:02 crc kubenswrapper[4725]: E1014 14:08:02.808972 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71723eb2f839c342b7d9901260c23b34f79d63a6f232a2abdffef45bf8a6183b\": container with ID starting with 71723eb2f839c342b7d9901260c23b34f79d63a6f232a2abdffef45bf8a6183b not found: ID does not exist" containerID="71723eb2f839c342b7d9901260c23b34f79d63a6f232a2abdffef45bf8a6183b" Oct 14 14:08:02 crc kubenswrapper[4725]: I1014 14:08:02.808996 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71723eb2f839c342b7d9901260c23b34f79d63a6f232a2abdffef45bf8a6183b"} err="failed to get container status \"71723eb2f839c342b7d9901260c23b34f79d63a6f232a2abdffef45bf8a6183b\": rpc error: code = NotFound desc = could not find container \"71723eb2f839c342b7d9901260c23b34f79d63a6f232a2abdffef45bf8a6183b\": container with ID starting with 71723eb2f839c342b7d9901260c23b34f79d63a6f232a2abdffef45bf8a6183b not found: ID does not exist" Oct 14 14:08:02 crc kubenswrapper[4725]: I1014 14:08:02.809018 4725 scope.go:117] "RemoveContainer" containerID="b3902d4419a084deb5b7afd96355f32224ded3f3b89858767baaa799e8a633bb" Oct 14 14:08:02 crc kubenswrapper[4725]: E1014 14:08:02.809489 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3902d4419a084deb5b7afd96355f32224ded3f3b89858767baaa799e8a633bb\": container with ID starting with b3902d4419a084deb5b7afd96355f32224ded3f3b89858767baaa799e8a633bb not found: ID does not exist" containerID="b3902d4419a084deb5b7afd96355f32224ded3f3b89858767baaa799e8a633bb" Oct 14 14:08:02 crc kubenswrapper[4725]: I1014 14:08:02.809510 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3902d4419a084deb5b7afd96355f32224ded3f3b89858767baaa799e8a633bb"} err="failed to get container status \"b3902d4419a084deb5b7afd96355f32224ded3f3b89858767baaa799e8a633bb\": rpc error: code = NotFound desc = could not find 
container \"b3902d4419a084deb5b7afd96355f32224ded3f3b89858767baaa799e8a633bb\": container with ID starting with b3902d4419a084deb5b7afd96355f32224ded3f3b89858767baaa799e8a633bb not found: ID does not exist" Oct 14 14:08:02 crc kubenswrapper[4725]: I1014 14:08:02.809529 4725 scope.go:117] "RemoveContainer" containerID="107c898346bed6f9aab68fb575aa2ec30f2d5f8da990a7d529b68335964c1699" Oct 14 14:08:02 crc kubenswrapper[4725]: E1014 14:08:02.809862 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"107c898346bed6f9aab68fb575aa2ec30f2d5f8da990a7d529b68335964c1699\": container with ID starting with 107c898346bed6f9aab68fb575aa2ec30f2d5f8da990a7d529b68335964c1699 not found: ID does not exist" containerID="107c898346bed6f9aab68fb575aa2ec30f2d5f8da990a7d529b68335964c1699" Oct 14 14:08:02 crc kubenswrapper[4725]: I1014 14:08:02.809899 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"107c898346bed6f9aab68fb575aa2ec30f2d5f8da990a7d529b68335964c1699"} err="failed to get container status \"107c898346bed6f9aab68fb575aa2ec30f2d5f8da990a7d529b68335964c1699\": rpc error: code = NotFound desc = could not find container \"107c898346bed6f9aab68fb575aa2ec30f2d5f8da990a7d529b68335964c1699\": container with ID starting with 107c898346bed6f9aab68fb575aa2ec30f2d5f8da990a7d529b68335964c1699 not found: ID does not exist" Oct 14 14:08:03 crc kubenswrapper[4725]: I1014 14:08:03.935074 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ca0a957-2f46-47f7-b577-ba370b6f1ef1" path="/var/lib/kubelet/pods/0ca0a957-2f46-47f7-b577-ba370b6f1ef1/volumes" Oct 14 14:08:06 crc kubenswrapper[4725]: I1014 14:08:06.921714 4725 scope.go:117] "RemoveContainer" containerID="2be0c39729f010a9af8428f8e57dd6b737cbe9299288deecdc98318e8f6194f2" Oct 14 14:08:06 crc kubenswrapper[4725]: E1014 14:08:06.922334 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 14:08:21 crc kubenswrapper[4725]: I1014 14:08:21.921069 4725 scope.go:117] "RemoveContainer" containerID="2be0c39729f010a9af8428f8e57dd6b737cbe9299288deecdc98318e8f6194f2" Oct 14 14:08:21 crc kubenswrapper[4725]: E1014 14:08:21.921968 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 14:08:36 crc kubenswrapper[4725]: I1014 14:08:36.920798 4725 scope.go:117] "RemoveContainer" containerID="2be0c39729f010a9af8428f8e57dd6b737cbe9299288deecdc98318e8f6194f2" Oct 14 14:08:36 crc kubenswrapper[4725]: E1014 14:08:36.921570 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 14:08:47 crc kubenswrapper[4725]: I1014 14:08:47.921863 4725 scope.go:117] "RemoveContainer" containerID="2be0c39729f010a9af8428f8e57dd6b737cbe9299288deecdc98318e8f6194f2" Oct 14 14:08:47 crc kubenswrapper[4725]: E1014 14:08:47.923225 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 14:08:58 crc kubenswrapper[4725]: I1014 14:08:58.922324 4725 scope.go:117] "RemoveContainer" containerID="2be0c39729f010a9af8428f8e57dd6b737cbe9299288deecdc98318e8f6194f2" Oct 14 14:08:58 crc kubenswrapper[4725]: E1014 14:08:58.923577 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 14:09:10 crc kubenswrapper[4725]: I1014 14:09:10.921912 4725 scope.go:117] "RemoveContainer" containerID="2be0c39729f010a9af8428f8e57dd6b737cbe9299288deecdc98318e8f6194f2" Oct 14 14:09:11 crc kubenswrapper[4725]: I1014 14:09:11.326874 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" event={"ID":"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c","Type":"ContainerStarted","Data":"6fee48a36d192c8eadf8b48d4f629647a2d4809a7741f2119d1d453cb7ebd3e9"} Oct 14 14:09:22 crc kubenswrapper[4725]: I1014 14:09:22.937848 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9b6t9"] Oct 14 14:09:22 crc kubenswrapper[4725]: E1014 14:09:22.938856 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ca0a957-2f46-47f7-b577-ba370b6f1ef1" containerName="extract-content" Oct 14 14:09:22 crc kubenswrapper[4725]: I1014 14:09:22.938870 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ca0a957-2f46-47f7-b577-ba370b6f1ef1" containerName="extract-content" Oct 14 14:09:22 crc kubenswrapper[4725]: E1014 14:09:22.938902 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ca0a957-2f46-47f7-b577-ba370b6f1ef1" containerName="registry-server" Oct 14 14:09:22 crc kubenswrapper[4725]: I1014 14:09:22.938907 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ca0a957-2f46-47f7-b577-ba370b6f1ef1" containerName="registry-server" Oct 14 14:09:22 crc kubenswrapper[4725]: E1014 14:09:22.938924 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ca0a957-2f46-47f7-b577-ba370b6f1ef1" containerName="extract-utilities" Oct 14 14:09:22 crc kubenswrapper[4725]: I1014 14:09:22.938931 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ca0a957-2f46-47f7-b577-ba370b6f1ef1" containerName="extract-utilities" Oct 14 14:09:22 crc kubenswrapper[4725]: I1014 14:09:22.939158 4725 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="0ca0a957-2f46-47f7-b577-ba370b6f1ef1" containerName="registry-server" Oct 14 14:09:22 crc kubenswrapper[4725]: I1014 14:09:22.941414 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9b6t9" Oct 14 14:09:22 crc kubenswrapper[4725]: I1014 14:09:22.951045 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9b6t9"] Oct 14 14:09:23 crc kubenswrapper[4725]: I1014 14:09:23.036135 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4d37cb7-85af-41d3-a3cf-5f25f518975e-catalog-content\") pod \"redhat-marketplace-9b6t9\" (UID: \"a4d37cb7-85af-41d3-a3cf-5f25f518975e\") " pod="openshift-marketplace/redhat-marketplace-9b6t9" Oct 14 14:09:23 crc kubenswrapper[4725]: I1014 14:09:23.036206 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4d37cb7-85af-41d3-a3cf-5f25f518975e-utilities\") pod \"redhat-marketplace-9b6t9\" (UID: \"a4d37cb7-85af-41d3-a3cf-5f25f518975e\") " pod="openshift-marketplace/redhat-marketplace-9b6t9" Oct 14 14:09:23 crc kubenswrapper[4725]: I1014 14:09:23.036365 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j969\" (UniqueName: \"kubernetes.io/projected/a4d37cb7-85af-41d3-a3cf-5f25f518975e-kube-api-access-5j969\") pod \"redhat-marketplace-9b6t9\" (UID: \"a4d37cb7-85af-41d3-a3cf-5f25f518975e\") " pod="openshift-marketplace/redhat-marketplace-9b6t9" Oct 14 14:09:23 crc kubenswrapper[4725]: I1014 14:09:23.137812 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4d37cb7-85af-41d3-a3cf-5f25f518975e-utilities\") pod \"redhat-marketplace-9b6t9\" (UID: \"a4d37cb7-85af-41d3-a3cf-5f25f518975e\") " pod="openshift-marketplace/redhat-marketplace-9b6t9" Oct 14 14:09:23 crc kubenswrapper[4725]: I1014 14:09:23.137964 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j969\" (UniqueName: \"kubernetes.io/projected/a4d37cb7-85af-41d3-a3cf-5f25f518975e-kube-api-access-5j969\") pod \"redhat-marketplace-9b6t9\" (UID: \"a4d37cb7-85af-41d3-a3cf-5f25f518975e\") " pod="openshift-marketplace/redhat-marketplace-9b6t9" Oct 14 14:09:23 crc kubenswrapper[4725]: I1014 14:09:23.138039 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4d37cb7-85af-41d3-a3cf-5f25f518975e-catalog-content\") pod \"redhat-marketplace-9b6t9\" (UID: \"a4d37cb7-85af-41d3-a3cf-5f25f518975e\") " pod="openshift-marketplace/redhat-marketplace-9b6t9" Oct 14 14:09:23 crc kubenswrapper[4725]: I1014 14:09:23.138317 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4d37cb7-85af-41d3-a3cf-5f25f518975e-utilities\") pod \"redhat-marketplace-9b6t9\" (UID: \"a4d37cb7-85af-41d3-a3cf-5f25f518975e\") " pod="openshift-marketplace/redhat-marketplace-9b6t9" Oct 14 14:09:23 crc kubenswrapper[4725]: I1014 14:09:23.138404 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4d37cb7-85af-41d3-a3cf-5f25f518975e-catalog-content\") pod \"redhat-marketplace-9b6t9\" (UID: 
\"a4d37cb7-85af-41d3-a3cf-5f25f518975e\") " pod="openshift-marketplace/redhat-marketplace-9b6t9" Oct 14 14:09:23 crc kubenswrapper[4725]: I1014 14:09:23.160300 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j969\" (UniqueName: \"kubernetes.io/projected/a4d37cb7-85af-41d3-a3cf-5f25f518975e-kube-api-access-5j969\") pod \"redhat-marketplace-9b6t9\" (UID: \"a4d37cb7-85af-41d3-a3cf-5f25f518975e\") " pod="openshift-marketplace/redhat-marketplace-9b6t9" Oct 14 14:09:23 crc kubenswrapper[4725]: I1014 14:09:23.316049 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9b6t9" Oct 14 14:09:23 crc kubenswrapper[4725]: I1014 14:09:23.762996 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9b6t9"] Oct 14 14:09:24 crc kubenswrapper[4725]: I1014 14:09:24.445989 4725 generic.go:334] "Generic (PLEG): container finished" podID="a4d37cb7-85af-41d3-a3cf-5f25f518975e" containerID="8be3e7fb117d1e47b85280b56b900e312b2dd2550be6a6ab04e76df2cfc34a56" exitCode=0 Oct 14 14:09:24 crc kubenswrapper[4725]: I1014 14:09:24.446077 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9b6t9" event={"ID":"a4d37cb7-85af-41d3-a3cf-5f25f518975e","Type":"ContainerDied","Data":"8be3e7fb117d1e47b85280b56b900e312b2dd2550be6a6ab04e76df2cfc34a56"} Oct 14 14:09:24 crc kubenswrapper[4725]: I1014 14:09:24.446308 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9b6t9" event={"ID":"a4d37cb7-85af-41d3-a3cf-5f25f518975e","Type":"ContainerStarted","Data":"62d9afae3d4ce0b40e3de8742c715fbb92380405c3b8cd421ccd30614035754c"} Oct 14 14:09:26 crc kubenswrapper[4725]: I1014 14:09:26.480051 4725 generic.go:334] "Generic (PLEG): container finished" podID="a4d37cb7-85af-41d3-a3cf-5f25f518975e" containerID="b438809ad8996faa547004d28092c5a2b96c490eac806de8026072c8da66af35" exitCode=0 Oct 14 14:09:26 crc kubenswrapper[4725]: I1014 14:09:26.480130 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9b6t9" event={"ID":"a4d37cb7-85af-41d3-a3cf-5f25f518975e","Type":"ContainerDied","Data":"b438809ad8996faa547004d28092c5a2b96c490eac806de8026072c8da66af35"} Oct 14 14:09:27 crc kubenswrapper[4725]: I1014 14:09:27.491400 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9b6t9" event={"ID":"a4d37cb7-85af-41d3-a3cf-5f25f518975e","Type":"ContainerStarted","Data":"91358d855990282429f4de44be7f479efbbf5cb77558edc19d04a4ff069bb7bd"} Oct 14 14:09:27 crc kubenswrapper[4725]: I1014 14:09:27.517825 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9b6t9" podStartSLOduration=3.082355829 podStartE2EDuration="5.517804815s" podCreationTimestamp="2025-10-14 14:09:22 +0000 UTC" firstStartedPulling="2025-10-14 14:09:24.447770014 +0000 UTC m=+3281.296204823" lastFinishedPulling="2025-10-14 14:09:26.883219 +0000 UTC m=+3283.731653809" observedRunningTime="2025-10-14 14:09:27.511927846 +0000 UTC m=+3284.360362645" watchObservedRunningTime="2025-10-14 14:09:27.517804815 +0000 UTC m=+3284.366239624" Oct 14 14:09:33 crc kubenswrapper[4725]: I1014 14:09:33.316779 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9b6t9" Oct 14 14:09:33 crc kubenswrapper[4725]: I1014 14:09:33.317430 4725 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9b6t9" Oct 14 14:09:33 crc kubenswrapper[4725]: I1014 14:09:33.364467 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9b6t9" Oct 14 14:09:33 crc kubenswrapper[4725]: I1014 14:09:33.605110 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9b6t9" Oct 14 14:09:33 crc kubenswrapper[4725]: I1014 14:09:33.656713 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9b6t9"] Oct 14 14:09:35 crc kubenswrapper[4725]: I1014 14:09:35.579215 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9b6t9" podUID="a4d37cb7-85af-41d3-a3cf-5f25f518975e" containerName="registry-server" containerID="cri-o://91358d855990282429f4de44be7f479efbbf5cb77558edc19d04a4ff069bb7bd" gracePeriod=2 Oct 14 14:09:36 crc kubenswrapper[4725]: I1014 14:09:36.103001 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9b6t9" Oct 14 14:09:36 crc kubenswrapper[4725]: I1014 14:09:36.198085 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4d37cb7-85af-41d3-a3cf-5f25f518975e-utilities\") pod \"a4d37cb7-85af-41d3-a3cf-5f25f518975e\" (UID: \"a4d37cb7-85af-41d3-a3cf-5f25f518975e\") " Oct 14 14:09:36 crc kubenswrapper[4725]: I1014 14:09:36.198220 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4d37cb7-85af-41d3-a3cf-5f25f518975e-catalog-content\") pod \"a4d37cb7-85af-41d3-a3cf-5f25f518975e\" (UID: \"a4d37cb7-85af-41d3-a3cf-5f25f518975e\") " Oct 14 14:09:36 crc kubenswrapper[4725]: I1014 14:09:36.198291 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5j969\" (UniqueName: \"kubernetes.io/projected/a4d37cb7-85af-41d3-a3cf-5f25f518975e-kube-api-access-5j969\") pod \"a4d37cb7-85af-41d3-a3cf-5f25f518975e\" (UID: \"a4d37cb7-85af-41d3-a3cf-5f25f518975e\") " Oct 14 14:09:36 crc kubenswrapper[4725]: I1014 14:09:36.199327 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4d37cb7-85af-41d3-a3cf-5f25f518975e-utilities" (OuterVolumeSpecName: "utilities") pod "a4d37cb7-85af-41d3-a3cf-5f25f518975e" (UID: "a4d37cb7-85af-41d3-a3cf-5f25f518975e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 14:09:36 crc kubenswrapper[4725]: I1014 14:09:36.205714 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4d37cb7-85af-41d3-a3cf-5f25f518975e-kube-api-access-5j969" (OuterVolumeSpecName: "kube-api-access-5j969") pod "a4d37cb7-85af-41d3-a3cf-5f25f518975e" (UID: "a4d37cb7-85af-41d3-a3cf-5f25f518975e"). InnerVolumeSpecName "kube-api-access-5j969". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:09:36 crc kubenswrapper[4725]: I1014 14:09:36.228114 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4d37cb7-85af-41d3-a3cf-5f25f518975e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a4d37cb7-85af-41d3-a3cf-5f25f518975e" (UID: "a4d37cb7-85af-41d3-a3cf-5f25f518975e"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 14:09:36 crc kubenswrapper[4725]: I1014 14:09:36.300365 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4d37cb7-85af-41d3-a3cf-5f25f518975e-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 14:09:36 crc kubenswrapper[4725]: I1014 14:09:36.300408 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4d37cb7-85af-41d3-a3cf-5f25f518975e-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 14:09:36 crc kubenswrapper[4725]: I1014 14:09:36.300419 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5j969\" (UniqueName: \"kubernetes.io/projected/a4d37cb7-85af-41d3-a3cf-5f25f518975e-kube-api-access-5j969\") on node \"crc\" DevicePath \"\"" Oct 14 14:09:36 crc kubenswrapper[4725]: I1014 14:09:36.592471 4725 generic.go:334] "Generic (PLEG): container finished" podID="a4d37cb7-85af-41d3-a3cf-5f25f518975e" containerID="91358d855990282429f4de44be7f479efbbf5cb77558edc19d04a4ff069bb7bd" exitCode=0 Oct 14 14:09:36 crc kubenswrapper[4725]: I1014 14:09:36.592507 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9b6t9" event={"ID":"a4d37cb7-85af-41d3-a3cf-5f25f518975e","Type":"ContainerDied","Data":"91358d855990282429f4de44be7f479efbbf5cb77558edc19d04a4ff069bb7bd"} Oct 14 14:09:36 crc kubenswrapper[4725]: I1014 14:09:36.592533 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9b6t9" event={"ID":"a4d37cb7-85af-41d3-a3cf-5f25f518975e","Type":"ContainerDied","Data":"62d9afae3d4ce0b40e3de8742c715fbb92380405c3b8cd421ccd30614035754c"} Oct 14 14:09:36 crc kubenswrapper[4725]: I1014 14:09:36.592550 4725 scope.go:117] "RemoveContainer" containerID="91358d855990282429f4de44be7f479efbbf5cb77558edc19d04a4ff069bb7bd" Oct 14 14:09:36 crc kubenswrapper[4725]: I1014 14:09:36.592657 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9b6t9" Oct 14 14:09:36 crc kubenswrapper[4725]: I1014 14:09:36.612768 4725 scope.go:117] "RemoveContainer" containerID="b438809ad8996faa547004d28092c5a2b96c490eac806de8026072c8da66af35" Oct 14 14:09:36 crc kubenswrapper[4725]: I1014 14:09:36.631100 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9b6t9"] Oct 14 14:09:36 crc kubenswrapper[4725]: I1014 14:09:36.644707 4725 scope.go:117] "RemoveContainer" containerID="8be3e7fb117d1e47b85280b56b900e312b2dd2550be6a6ab04e76df2cfc34a56" Oct 14 14:09:36 crc kubenswrapper[4725]: I1014 14:09:36.647814 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9b6t9"] Oct 14 14:09:36 crc kubenswrapper[4725]: I1014 14:09:36.684980 4725 scope.go:117] "RemoveContainer" containerID="91358d855990282429f4de44be7f479efbbf5cb77558edc19d04a4ff069bb7bd" Oct 14 14:09:36 crc kubenswrapper[4725]: E1014 14:09:36.685491 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91358d855990282429f4de44be7f479efbbf5cb77558edc19d04a4ff069bb7bd\": container with ID starting with 91358d855990282429f4de44be7f479efbbf5cb77558edc19d04a4ff069bb7bd not found: ID does not exist" containerID="91358d855990282429f4de44be7f479efbbf5cb77558edc19d04a4ff069bb7bd" Oct 14 14:09:36 crc kubenswrapper[4725]: I1014 14:09:36.685540 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91358d855990282429f4de44be7f479efbbf5cb77558edc19d04a4ff069bb7bd"} err="failed to get container status \"91358d855990282429f4de44be7f479efbbf5cb77558edc19d04a4ff069bb7bd\": rpc error: code = NotFound desc = could not find container \"91358d855990282429f4de44be7f479efbbf5cb77558edc19d04a4ff069bb7bd\": container with ID starting with 91358d855990282429f4de44be7f479efbbf5cb77558edc19d04a4ff069bb7bd not found: ID does not exist" Oct 14 14:09:36 crc kubenswrapper[4725]: I1014 14:09:36.685573 4725 scope.go:117] "RemoveContainer" containerID="b438809ad8996faa547004d28092c5a2b96c490eac806de8026072c8da66af35" Oct 14 14:09:36 crc kubenswrapper[4725]: E1014 14:09:36.686051 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b438809ad8996faa547004d28092c5a2b96c490eac806de8026072c8da66af35\": container with ID starting with b438809ad8996faa547004d28092c5a2b96c490eac806de8026072c8da66af35 not found: ID does not exist" containerID="b438809ad8996faa547004d28092c5a2b96c490eac806de8026072c8da66af35" Oct 14 14:09:36 crc kubenswrapper[4725]: I1014 14:09:36.686090 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b438809ad8996faa547004d28092c5a2b96c490eac806de8026072c8da66af35"} err="failed to get container status \"b438809ad8996faa547004d28092c5a2b96c490eac806de8026072c8da66af35\": rpc error: code = NotFound desc = could not find container \"b438809ad8996faa547004d28092c5a2b96c490eac806de8026072c8da66af35\": container with ID starting with b438809ad8996faa547004d28092c5a2b96c490eac806de8026072c8da66af35 not found: ID does not exist" Oct 14 14:09:36 crc kubenswrapper[4725]: I1014 14:09:36.686120 4725 scope.go:117] "RemoveContainer" containerID="8be3e7fb117d1e47b85280b56b900e312b2dd2550be6a6ab04e76df2cfc34a56" Oct 14 14:09:36 crc kubenswrapper[4725]: E1014 14:09:36.686384 4725 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8be3e7fb117d1e47b85280b56b900e312b2dd2550be6a6ab04e76df2cfc34a56\": container with ID starting with 8be3e7fb117d1e47b85280b56b900e312b2dd2550be6a6ab04e76df2cfc34a56 not found: ID does not exist" containerID="8be3e7fb117d1e47b85280b56b900e312b2dd2550be6a6ab04e76df2cfc34a56" Oct 14 14:09:36 crc kubenswrapper[4725]: I1014 14:09:36.686425 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8be3e7fb117d1e47b85280b56b900e312b2dd2550be6a6ab04e76df2cfc34a56"} err="failed to get container status \"8be3e7fb117d1e47b85280b56b900e312b2dd2550be6a6ab04e76df2cfc34a56\": rpc error: code = NotFound desc = could not find container \"8be3e7fb117d1e47b85280b56b900e312b2dd2550be6a6ab04e76df2cfc34a56\": container with ID starting with 8be3e7fb117d1e47b85280b56b900e312b2dd2550be6a6ab04e76df2cfc34a56 not found: ID does not exist" Oct 14 14:09:37 crc kubenswrapper[4725]: I1014 14:09:37.932882 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4d37cb7-85af-41d3-a3cf-5f25f518975e" path="/var/lib/kubelet/pods/a4d37cb7-85af-41d3-a3cf-5f25f518975e/volumes" Oct 14 14:11:16 crc kubenswrapper[4725]: I1014 14:11:16.577913 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hp9lt"] Oct 14 14:11:16 crc kubenswrapper[4725]: E1014 14:11:16.578918 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4d37cb7-85af-41d3-a3cf-5f25f518975e" containerName="registry-server" Oct 14 14:11:16 crc kubenswrapper[4725]: I1014 14:11:16.578932 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4d37cb7-85af-41d3-a3cf-5f25f518975e" containerName="registry-server" Oct 14 14:11:16 crc kubenswrapper[4725]: E1014 14:11:16.578953 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4d37cb7-85af-41d3-a3cf-5f25f518975e" containerName="extract-utilities" Oct 14 14:11:16 crc kubenswrapper[4725]: I1014 14:11:16.578959 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4d37cb7-85af-41d3-a3cf-5f25f518975e" containerName="extract-utilities" Oct 14 14:11:16 crc kubenswrapper[4725]: E1014 14:11:16.578971 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4d37cb7-85af-41d3-a3cf-5f25f518975e" containerName="extract-content" Oct 14 14:11:16 crc kubenswrapper[4725]: I1014 14:11:16.578978 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4d37cb7-85af-41d3-a3cf-5f25f518975e" containerName="extract-content" Oct 14 14:11:16 crc kubenswrapper[4725]: I1014 14:11:16.579158 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4d37cb7-85af-41d3-a3cf-5f25f518975e" containerName="registry-server" Oct 14 14:11:16 crc kubenswrapper[4725]: I1014 14:11:16.580651 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hp9lt" Oct 14 14:11:16 crc kubenswrapper[4725]: I1014 14:11:16.601094 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hp9lt"] Oct 14 14:11:16 crc kubenswrapper[4725]: I1014 14:11:16.735210 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9947e89f-80f6-4afa-ae73-1b260086d8c3-utilities\") pod \"community-operators-hp9lt\" (UID: \"9947e89f-80f6-4afa-ae73-1b260086d8c3\") " pod="openshift-marketplace/community-operators-hp9lt" Oct 14 14:11:16 crc kubenswrapper[4725]: I1014 14:11:16.735311 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9947e89f-80f6-4afa-ae73-1b260086d8c3-catalog-content\") pod \"community-operators-hp9lt\" (UID: \"9947e89f-80f6-4afa-ae73-1b260086d8c3\") " pod="openshift-marketplace/community-operators-hp9lt" Oct 14 14:11:16 crc kubenswrapper[4725]: I1014 14:11:16.735401 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r54n\" (UniqueName: \"kubernetes.io/projected/9947e89f-80f6-4afa-ae73-1b260086d8c3-kube-api-access-2r54n\") pod \"community-operators-hp9lt\" (UID: \"9947e89f-80f6-4afa-ae73-1b260086d8c3\") " pod="openshift-marketplace/community-operators-hp9lt" Oct 14 14:11:16 crc kubenswrapper[4725]: I1014 14:11:16.837156 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9947e89f-80f6-4afa-ae73-1b260086d8c3-utilities\") pod \"community-operators-hp9lt\" (UID: \"9947e89f-80f6-4afa-ae73-1b260086d8c3\") " pod="openshift-marketplace/community-operators-hp9lt" Oct 14 14:11:16 crc kubenswrapper[4725]: I1014 14:11:16.837271 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9947e89f-80f6-4afa-ae73-1b260086d8c3-catalog-content\") pod \"community-operators-hp9lt\" (UID: \"9947e89f-80f6-4afa-ae73-1b260086d8c3\") " pod="openshift-marketplace/community-operators-hp9lt" Oct 14 14:11:16 crc kubenswrapper[4725]: I1014 14:11:16.837322 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2r54n\" (UniqueName: \"kubernetes.io/projected/9947e89f-80f6-4afa-ae73-1b260086d8c3-kube-api-access-2r54n\") pod \"community-operators-hp9lt\" (UID: \"9947e89f-80f6-4afa-ae73-1b260086d8c3\") " pod="openshift-marketplace/community-operators-hp9lt" Oct 14 14:11:16 crc kubenswrapper[4725]: I1014 14:11:16.837701 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9947e89f-80f6-4afa-ae73-1b260086d8c3-utilities\") pod \"community-operators-hp9lt\" (UID: \"9947e89f-80f6-4afa-ae73-1b260086d8c3\") " pod="openshift-marketplace/community-operators-hp9lt" Oct 14 14:11:16 crc kubenswrapper[4725]: I1014 14:11:16.837777 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9947e89f-80f6-4afa-ae73-1b260086d8c3-catalog-content\") pod \"community-operators-hp9lt\" (UID: \"9947e89f-80f6-4afa-ae73-1b260086d8c3\") " pod="openshift-marketplace/community-operators-hp9lt" Oct 14 14:11:16 crc kubenswrapper[4725]: I1014 14:11:16.865372 4725 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2r54n\" (UniqueName: \"kubernetes.io/projected/9947e89f-80f6-4afa-ae73-1b260086d8c3-kube-api-access-2r54n\") pod \"community-operators-hp9lt\" (UID: \"9947e89f-80f6-4afa-ae73-1b260086d8c3\") " pod="openshift-marketplace/community-operators-hp9lt" Oct 14 14:11:16 crc kubenswrapper[4725]: I1014 14:11:16.909132 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hp9lt" Oct 14 14:11:17 crc kubenswrapper[4725]: I1014 14:11:17.449256 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hp9lt"] Oct 14 14:11:17 crc kubenswrapper[4725]: I1014 14:11:17.549345 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hp9lt" event={"ID":"9947e89f-80f6-4afa-ae73-1b260086d8c3","Type":"ContainerStarted","Data":"20438106a75cba37a40f74db6d3cfec6e417b94243af5239080869f3bb977ec5"} Oct 14 14:11:18 crc kubenswrapper[4725]: I1014 14:11:18.562894 4725 generic.go:334] "Generic (PLEG): container finished" podID="9947e89f-80f6-4afa-ae73-1b260086d8c3" containerID="2ac0b879664c33f3f4934568402048e87bc0d218ed8ca3e0d9053df7fbb0e1e5" exitCode=0 Oct 14 14:11:18 crc kubenswrapper[4725]: I1014 14:11:18.563018 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hp9lt" event={"ID":"9947e89f-80f6-4afa-ae73-1b260086d8c3","Type":"ContainerDied","Data":"2ac0b879664c33f3f4934568402048e87bc0d218ed8ca3e0d9053df7fbb0e1e5"} Oct 14 14:11:18 crc kubenswrapper[4725]: I1014 14:11:18.565510 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 14:11:19 crc kubenswrapper[4725]: I1014 14:11:19.573340 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hp9lt" event={"ID":"9947e89f-80f6-4afa-ae73-1b260086d8c3","Type":"ContainerStarted","Data":"c438db528cd556852c79e9b1a435151c1f09e35be299b9e476e06e8883164b44"} Oct 14 14:11:20 crc kubenswrapper[4725]: I1014 14:11:20.586322 4725 generic.go:334] "Generic (PLEG): container finished" podID="9947e89f-80f6-4afa-ae73-1b260086d8c3" containerID="c438db528cd556852c79e9b1a435151c1f09e35be299b9e476e06e8883164b44" exitCode=0 Oct 14 14:11:20 crc kubenswrapper[4725]: I1014 14:11:20.586385 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hp9lt" event={"ID":"9947e89f-80f6-4afa-ae73-1b260086d8c3","Type":"ContainerDied","Data":"c438db528cd556852c79e9b1a435151c1f09e35be299b9e476e06e8883164b44"} Oct 14 14:11:21 crc kubenswrapper[4725]: I1014 14:11:21.611418 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hp9lt" event={"ID":"9947e89f-80f6-4afa-ae73-1b260086d8c3","Type":"ContainerStarted","Data":"28226075466cf5e3632d853e553f08ae3266ea0a3dd5c9ad27b6b9eb58eb6e98"} Oct 14 14:11:21 crc kubenswrapper[4725]: I1014 14:11:21.638575 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hp9lt" podStartSLOduration=3.085490743 podStartE2EDuration="5.6385555s" podCreationTimestamp="2025-10-14 14:11:16 +0000 UTC" firstStartedPulling="2025-10-14 14:11:18.565273931 +0000 UTC m=+3395.413708740" lastFinishedPulling="2025-10-14 14:11:21.118338648 +0000 UTC m=+3397.966773497" observedRunningTime="2025-10-14 14:11:21.630349588 +0000 UTC m=+3398.478784407" watchObservedRunningTime="2025-10-14 
14:11:21.6385555 +0000 UTC m=+3398.486990309" Oct 14 14:11:26 crc kubenswrapper[4725]: I1014 14:11:26.909823 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hp9lt" Oct 14 14:11:26 crc kubenswrapper[4725]: I1014 14:11:26.910398 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hp9lt" Oct 14 14:11:26 crc kubenswrapper[4725]: I1014 14:11:26.964679 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hp9lt" Oct 14 14:11:27 crc kubenswrapper[4725]: I1014 14:11:27.708252 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hp9lt" Oct 14 14:11:27 crc kubenswrapper[4725]: I1014 14:11:27.755003 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hp9lt"] Oct 14 14:11:29 crc kubenswrapper[4725]: I1014 14:11:29.679350 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hp9lt" podUID="9947e89f-80f6-4afa-ae73-1b260086d8c3" containerName="registry-server" containerID="cri-o://28226075466cf5e3632d853e553f08ae3266ea0a3dd5c9ad27b6b9eb58eb6e98" gracePeriod=2 Oct 14 14:11:30 crc kubenswrapper[4725]: I1014 14:11:30.692691 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hp9lt" event={"ID":"9947e89f-80f6-4afa-ae73-1b260086d8c3","Type":"ContainerDied","Data":"28226075466cf5e3632d853e553f08ae3266ea0a3dd5c9ad27b6b9eb58eb6e98"} Oct 14 14:11:30 crc kubenswrapper[4725]: I1014 14:11:30.692570 4725 generic.go:334] "Generic (PLEG): container finished" podID="9947e89f-80f6-4afa-ae73-1b260086d8c3" containerID="28226075466cf5e3632d853e553f08ae3266ea0a3dd5c9ad27b6b9eb58eb6e98" exitCode=0 Oct 14 14:11:30 crc kubenswrapper[4725]: I1014 14:11:30.694275 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hp9lt" event={"ID":"9947e89f-80f6-4afa-ae73-1b260086d8c3","Type":"ContainerDied","Data":"20438106a75cba37a40f74db6d3cfec6e417b94243af5239080869f3bb977ec5"} Oct 14 14:11:30 crc kubenswrapper[4725]: I1014 14:11:30.694299 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20438106a75cba37a40f74db6d3cfec6e417b94243af5239080869f3bb977ec5" Oct 14 14:11:30 crc kubenswrapper[4725]: I1014 14:11:30.706927 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hp9lt" Oct 14 14:11:30 crc kubenswrapper[4725]: I1014 14:11:30.826694 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9947e89f-80f6-4afa-ae73-1b260086d8c3-catalog-content\") pod \"9947e89f-80f6-4afa-ae73-1b260086d8c3\" (UID: \"9947e89f-80f6-4afa-ae73-1b260086d8c3\") " Oct 14 14:11:30 crc kubenswrapper[4725]: I1014 14:11:30.826828 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2r54n\" (UniqueName: \"kubernetes.io/projected/9947e89f-80f6-4afa-ae73-1b260086d8c3-kube-api-access-2r54n\") pod \"9947e89f-80f6-4afa-ae73-1b260086d8c3\" (UID: \"9947e89f-80f6-4afa-ae73-1b260086d8c3\") " Oct 14 14:11:30 crc kubenswrapper[4725]: I1014 14:11:30.826953 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9947e89f-80f6-4afa-ae73-1b260086d8c3-utilities\") pod \"9947e89f-80f6-4afa-ae73-1b260086d8c3\" (UID: \"9947e89f-80f6-4afa-ae73-1b260086d8c3\") " Oct 14 14:11:30 crc kubenswrapper[4725]: I1014 14:11:30.828397 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9947e89f-80f6-4afa-ae73-1b260086d8c3-utilities" (OuterVolumeSpecName: "utilities") pod "9947e89f-80f6-4afa-ae73-1b260086d8c3" (UID: "9947e89f-80f6-4afa-ae73-1b260086d8c3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 14:11:30 crc kubenswrapper[4725]: I1014 14:11:30.836209 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9947e89f-80f6-4afa-ae73-1b260086d8c3-kube-api-access-2r54n" (OuterVolumeSpecName: "kube-api-access-2r54n") pod "9947e89f-80f6-4afa-ae73-1b260086d8c3" (UID: "9947e89f-80f6-4afa-ae73-1b260086d8c3"). InnerVolumeSpecName "kube-api-access-2r54n". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:11:30 crc kubenswrapper[4725]: I1014 14:11:30.891513 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9947e89f-80f6-4afa-ae73-1b260086d8c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9947e89f-80f6-4afa-ae73-1b260086d8c3" (UID: "9947e89f-80f6-4afa-ae73-1b260086d8c3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 14:11:30 crc kubenswrapper[4725]: I1014 14:11:30.929105 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2r54n\" (UniqueName: \"kubernetes.io/projected/9947e89f-80f6-4afa-ae73-1b260086d8c3-kube-api-access-2r54n\") on node \"crc\" DevicePath \"\"" Oct 14 14:11:30 crc kubenswrapper[4725]: I1014 14:11:30.929144 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9947e89f-80f6-4afa-ae73-1b260086d8c3-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 14:11:30 crc kubenswrapper[4725]: I1014 14:11:30.929155 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9947e89f-80f6-4afa-ae73-1b260086d8c3-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 14:11:31 crc kubenswrapper[4725]: I1014 14:11:31.702129 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hp9lt" Oct 14 14:11:31 crc kubenswrapper[4725]: I1014 14:11:31.738588 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hp9lt"] Oct 14 14:11:31 crc kubenswrapper[4725]: I1014 14:11:31.748951 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hp9lt"] Oct 14 14:11:31 crc kubenswrapper[4725]: I1014 14:11:31.932238 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9947e89f-80f6-4afa-ae73-1b260086d8c3" path="/var/lib/kubelet/pods/9947e89f-80f6-4afa-ae73-1b260086d8c3/volumes" Oct 14 14:11:32 crc kubenswrapper[4725]: I1014 14:11:32.521175 4725 patch_prober.go:28] interesting pod/machine-config-daemon-t9hh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 14:11:32 crc kubenswrapper[4725]: I1014 14:11:32.521263 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 14:12:02 crc kubenswrapper[4725]: I1014 14:12:02.520929 4725 patch_prober.go:28] interesting pod/machine-config-daemon-t9hh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 14:12:02 crc kubenswrapper[4725]: I1014 14:12:02.521607 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 14:12:32 crc kubenswrapper[4725]: I1014 14:12:32.520189 4725 patch_prober.go:28] interesting pod/machine-config-daemon-t9hh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 14:12:32 crc kubenswrapper[4725]: I1014 14:12:32.520799 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 14:12:32 crc kubenswrapper[4725]: I1014 14:12:32.520867 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" Oct 14 14:12:32 crc kubenswrapper[4725]: I1014 14:12:32.521811 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6fee48a36d192c8eadf8b48d4f629647a2d4809a7741f2119d1d453cb7ebd3e9"} pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" containerMessage="Container machine-config-daemon failed liveness probe, will be 
restarted" Oct 14 14:12:32 crc kubenswrapper[4725]: I1014 14:12:32.521869 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" containerID="cri-o://6fee48a36d192c8eadf8b48d4f629647a2d4809a7741f2119d1d453cb7ebd3e9" gracePeriod=600 Oct 14 14:12:33 crc kubenswrapper[4725]: I1014 14:12:33.276819 4725 generic.go:334] "Generic (PLEG): container finished" podID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerID="6fee48a36d192c8eadf8b48d4f629647a2d4809a7741f2119d1d453cb7ebd3e9" exitCode=0 Oct 14 14:12:33 crc kubenswrapper[4725]: I1014 14:12:33.276885 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" event={"ID":"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c","Type":"ContainerDied","Data":"6fee48a36d192c8eadf8b48d4f629647a2d4809a7741f2119d1d453cb7ebd3e9"} Oct 14 14:12:33 crc kubenswrapper[4725]: I1014 14:12:33.277230 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" event={"ID":"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c","Type":"ContainerStarted","Data":"1e2cf79492a165ced9cc9a27ff00cb3288ac5a0abfa5f696ffa2da70ed76d129"} Oct 14 14:12:33 crc kubenswrapper[4725]: I1014 14:12:33.277267 4725 scope.go:117] "RemoveContainer" containerID="2be0c39729f010a9af8428f8e57dd6b737cbe9299288deecdc98318e8f6194f2" Oct 14 14:12:35 crc kubenswrapper[4725]: I1014 14:12:35.297705 4725 generic.go:334] "Generic (PLEG): container finished" podID="148578f5-c02c-4ef4-a214-87532b2d29e2" containerID="b373f96046d1c755bd77cb78a7dcc0b85b9695d381bcf0c679c51fef0efe67a8" exitCode=0 Oct 14 14:12:35 crc kubenswrapper[4725]: I1014 14:12:35.297807 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"148578f5-c02c-4ef4-a214-87532b2d29e2","Type":"ContainerDied","Data":"b373f96046d1c755bd77cb78a7dcc0b85b9695d381bcf0c679c51fef0efe67a8"} Oct 14 14:12:36 crc kubenswrapper[4725]: I1014 14:12:36.632649 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Oct 14 14:12:36 crc kubenswrapper[4725]: I1014 14:12:36.748417 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/148578f5-c02c-4ef4-a214-87532b2d29e2-ssh-key\") pod \"148578f5-c02c-4ef4-a214-87532b2d29e2\" (UID: \"148578f5-c02c-4ef4-a214-87532b2d29e2\") " Oct 14 14:12:36 crc kubenswrapper[4725]: I1014 14:12:36.748591 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/148578f5-c02c-4ef4-a214-87532b2d29e2-config-data\") pod \"148578f5-c02c-4ef4-a214-87532b2d29e2\" (UID: \"148578f5-c02c-4ef4-a214-87532b2d29e2\") " Oct 14 14:12:36 crc kubenswrapper[4725]: I1014 14:12:36.748750 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6xwv\" (UniqueName: \"kubernetes.io/projected/148578f5-c02c-4ef4-a214-87532b2d29e2-kube-api-access-f6xwv\") pod \"148578f5-c02c-4ef4-a214-87532b2d29e2\" (UID: \"148578f5-c02c-4ef4-a214-87532b2d29e2\") " Oct 14 14:12:36 crc kubenswrapper[4725]: I1014 14:12:36.748811 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/148578f5-c02c-4ef4-a214-87532b2d29e2-ca-certs\") pod \"148578f5-c02c-4ef4-a214-87532b2d29e2\" (UID: \"148578f5-c02c-4ef4-a214-87532b2d29e2\") " Oct 14 14:12:36 crc kubenswrapper[4725]: I1014 14:12:36.748865 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/148578f5-c02c-4ef4-a214-87532b2d29e2-test-operator-ephemeral-temporary\") pod \"148578f5-c02c-4ef4-a214-87532b2d29e2\" (UID: \"148578f5-c02c-4ef4-a214-87532b2d29e2\") " Oct 14 14:12:36 crc kubenswrapper[4725]: I1014 14:12:36.748913 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"148578f5-c02c-4ef4-a214-87532b2d29e2\" (UID: \"148578f5-c02c-4ef4-a214-87532b2d29e2\") " Oct 14 14:12:36 crc kubenswrapper[4725]: I1014 14:12:36.748947 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/148578f5-c02c-4ef4-a214-87532b2d29e2-openstack-config-secret\") pod \"148578f5-c02c-4ef4-a214-87532b2d29e2\" (UID: \"148578f5-c02c-4ef4-a214-87532b2d29e2\") " Oct 14 14:12:36 crc kubenswrapper[4725]: I1014 14:12:36.749035 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/148578f5-c02c-4ef4-a214-87532b2d29e2-test-operator-ephemeral-workdir\") pod \"148578f5-c02c-4ef4-a214-87532b2d29e2\" (UID: \"148578f5-c02c-4ef4-a214-87532b2d29e2\") " Oct 14 14:12:36 crc kubenswrapper[4725]: I1014 14:12:36.749135 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/148578f5-c02c-4ef4-a214-87532b2d29e2-openstack-config\") pod \"148578f5-c02c-4ef4-a214-87532b2d29e2\" (UID: \"148578f5-c02c-4ef4-a214-87532b2d29e2\") " Oct 14 14:12:36 crc kubenswrapper[4725]: I1014 14:12:36.750104 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/148578f5-c02c-4ef4-a214-87532b2d29e2-config-data" (OuterVolumeSpecName: "config-data") pod 
"148578f5-c02c-4ef4-a214-87532b2d29e2" (UID: "148578f5-c02c-4ef4-a214-87532b2d29e2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:12:36 crc kubenswrapper[4725]: I1014 14:12:36.750739 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/148578f5-c02c-4ef4-a214-87532b2d29e2-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "148578f5-c02c-4ef4-a214-87532b2d29e2" (UID: "148578f5-c02c-4ef4-a214-87532b2d29e2"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 14:12:36 crc kubenswrapper[4725]: I1014 14:12:36.754984 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "test-operator-logs") pod "148578f5-c02c-4ef4-a214-87532b2d29e2" (UID: "148578f5-c02c-4ef4-a214-87532b2d29e2"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 14 14:12:36 crc kubenswrapper[4725]: I1014 14:12:36.755188 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/148578f5-c02c-4ef4-a214-87532b2d29e2-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "148578f5-c02c-4ef4-a214-87532b2d29e2" (UID: "148578f5-c02c-4ef4-a214-87532b2d29e2"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 14:12:36 crc kubenswrapper[4725]: I1014 14:12:36.755391 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/148578f5-c02c-4ef4-a214-87532b2d29e2-kube-api-access-f6xwv" (OuterVolumeSpecName: "kube-api-access-f6xwv") pod "148578f5-c02c-4ef4-a214-87532b2d29e2" (UID: "148578f5-c02c-4ef4-a214-87532b2d29e2"). InnerVolumeSpecName "kube-api-access-f6xwv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:12:36 crc kubenswrapper[4725]: I1014 14:12:36.778382 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/148578f5-c02c-4ef4-a214-87532b2d29e2-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "148578f5-c02c-4ef4-a214-87532b2d29e2" (UID: "148578f5-c02c-4ef4-a214-87532b2d29e2"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:12:36 crc kubenswrapper[4725]: I1014 14:12:36.780344 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/148578f5-c02c-4ef4-a214-87532b2d29e2-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "148578f5-c02c-4ef4-a214-87532b2d29e2" (UID: "148578f5-c02c-4ef4-a214-87532b2d29e2"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:12:36 crc kubenswrapper[4725]: I1014 14:12:36.794214 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/148578f5-c02c-4ef4-a214-87532b2d29e2-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "148578f5-c02c-4ef4-a214-87532b2d29e2" (UID: "148578f5-c02c-4ef4-a214-87532b2d29e2"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 14 14:12:36 crc kubenswrapper[4725]: I1014 14:12:36.805223 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/148578f5-c02c-4ef4-a214-87532b2d29e2-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "148578f5-c02c-4ef4-a214-87532b2d29e2" (UID: "148578f5-c02c-4ef4-a214-87532b2d29e2"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 14 14:12:36 crc kubenswrapper[4725]: I1014 14:12:36.851922 4725 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/148578f5-c02c-4ef4-a214-87532b2d29e2-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Oct 14 14:12:36 crc kubenswrapper[4725]: I1014 14:12:36.851957 4725 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/148578f5-c02c-4ef4-a214-87532b2d29e2-openstack-config\") on node \"crc\" DevicePath \"\"" Oct 14 14:12:36 crc kubenswrapper[4725]: I1014 14:12:36.851968 4725 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/148578f5-c02c-4ef4-a214-87532b2d29e2-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 14 14:12:36 crc kubenswrapper[4725]: I1014 14:12:36.851977 4725 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/148578f5-c02c-4ef4-a214-87532b2d29e2-config-data\") on node \"crc\" DevicePath \"\"" Oct 14 14:12:36 crc kubenswrapper[4725]: I1014 14:12:36.851984 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6xwv\" (UniqueName: \"kubernetes.io/projected/148578f5-c02c-4ef4-a214-87532b2d29e2-kube-api-access-f6xwv\") on node \"crc\" DevicePath \"\"" Oct 14 14:12:36 crc kubenswrapper[4725]: I1014 14:12:36.852008 4725 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/148578f5-c02c-4ef4-a214-87532b2d29e2-ca-certs\") on node \"crc\" DevicePath \"\"" Oct 14 14:12:36 crc kubenswrapper[4725]: I1014 14:12:36.852019 4725 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/148578f5-c02c-4ef4-a214-87532b2d29e2-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Oct 14 14:12:36 crc kubenswrapper[4725]: I1014 14:12:36.852056 4725 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Oct 14 14:12:36 crc kubenswrapper[4725]: I1014 14:12:36.852067 4725 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/148578f5-c02c-4ef4-a214-87532b2d29e2-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Oct 14 14:12:36 crc kubenswrapper[4725]: I1014 14:12:36.874520 4725 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Oct 14 14:12:36 crc kubenswrapper[4725]: I1014 14:12:36.954263 4725 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Oct 14 14:12:37 crc kubenswrapper[4725]: I1014 14:12:37.320652 4725 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/tempest-tests-tempest" event={"ID":"148578f5-c02c-4ef4-a214-87532b2d29e2","Type":"ContainerDied","Data":"eeccb0cd60a7f45815cf1e7d03ece365809e26f81cd93ec542fb0616312f6ff5"} Oct 14 14:12:37 crc kubenswrapper[4725]: I1014 14:12:37.321106 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eeccb0cd60a7f45815cf1e7d03ece365809e26f81cd93ec542fb0616312f6ff5" Oct 14 14:12:37 crc kubenswrapper[4725]: I1014 14:12:37.320729 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Oct 14 14:12:39 crc kubenswrapper[4725]: I1014 14:12:39.400116 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Oct 14 14:12:39 crc kubenswrapper[4725]: E1014 14:12:39.401783 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9947e89f-80f6-4afa-ae73-1b260086d8c3" containerName="registry-server" Oct 14 14:12:39 crc kubenswrapper[4725]: I1014 14:12:39.401816 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9947e89f-80f6-4afa-ae73-1b260086d8c3" containerName="registry-server" Oct 14 14:12:39 crc kubenswrapper[4725]: E1014 14:12:39.401836 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9947e89f-80f6-4afa-ae73-1b260086d8c3" containerName="extract-content" Oct 14 14:12:39 crc kubenswrapper[4725]: I1014 14:12:39.401845 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9947e89f-80f6-4afa-ae73-1b260086d8c3" containerName="extract-content" Oct 14 14:12:39 crc kubenswrapper[4725]: E1014 14:12:39.401858 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="148578f5-c02c-4ef4-a214-87532b2d29e2" containerName="tempest-tests-tempest-tests-runner" Oct 14 14:12:39 crc kubenswrapper[4725]: I1014 14:12:39.401864 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="148578f5-c02c-4ef4-a214-87532b2d29e2" containerName="tempest-tests-tempest-tests-runner" Oct 14 14:12:39 crc kubenswrapper[4725]: E1014 14:12:39.401899 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9947e89f-80f6-4afa-ae73-1b260086d8c3" containerName="extract-utilities" Oct 14 14:12:39 crc kubenswrapper[4725]: I1014 14:12:39.401905 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9947e89f-80f6-4afa-ae73-1b260086d8c3" containerName="extract-utilities" Oct 14 14:12:39 crc kubenswrapper[4725]: I1014 14:12:39.402085 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="148578f5-c02c-4ef4-a214-87532b2d29e2" containerName="tempest-tests-tempest-tests-runner" Oct 14 14:12:39 crc kubenswrapper[4725]: I1014 14:12:39.402107 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9947e89f-80f6-4afa-ae73-1b260086d8c3" containerName="registry-server" Oct 14 14:12:39 crc kubenswrapper[4725]: I1014 14:12:39.402764 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 14 14:12:39 crc kubenswrapper[4725]: I1014 14:12:39.405680 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-ghhdq" Oct 14 14:12:39 crc kubenswrapper[4725]: I1014 14:12:39.413067 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Oct 14 14:12:39 crc kubenswrapper[4725]: I1014 14:12:39.507173 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"a47eaf15-990a-4f7e-aaec-7fc90ee8fdfd\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 14 14:12:39 crc kubenswrapper[4725]: I1014 14:12:39.507260 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfrqv\" (UniqueName: \"kubernetes.io/projected/a47eaf15-990a-4f7e-aaec-7fc90ee8fdfd-kube-api-access-lfrqv\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"a47eaf15-990a-4f7e-aaec-7fc90ee8fdfd\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 14 14:12:39 crc kubenswrapper[4725]: I1014 14:12:39.609359 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"a47eaf15-990a-4f7e-aaec-7fc90ee8fdfd\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 14 14:12:39 crc kubenswrapper[4725]: I1014 14:12:39.609502 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfrqv\" (UniqueName: \"kubernetes.io/projected/a47eaf15-990a-4f7e-aaec-7fc90ee8fdfd-kube-api-access-lfrqv\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"a47eaf15-990a-4f7e-aaec-7fc90ee8fdfd\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 14 14:12:39 crc kubenswrapper[4725]: I1014 14:12:39.610280 4725 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"a47eaf15-990a-4f7e-aaec-7fc90ee8fdfd\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 14 14:12:39 crc kubenswrapper[4725]: I1014 14:12:39.632019 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfrqv\" (UniqueName: \"kubernetes.io/projected/a47eaf15-990a-4f7e-aaec-7fc90ee8fdfd-kube-api-access-lfrqv\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"a47eaf15-990a-4f7e-aaec-7fc90ee8fdfd\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 14 14:12:39 crc kubenswrapper[4725]: I1014 14:12:39.662138 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"a47eaf15-990a-4f7e-aaec-7fc90ee8fdfd\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 14 14:12:39 crc 
kubenswrapper[4725]: I1014 14:12:39.723203 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 14 14:12:40 crc kubenswrapper[4725]: I1014 14:12:40.166490 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Oct 14 14:12:40 crc kubenswrapper[4725]: I1014 14:12:40.351879 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"a47eaf15-990a-4f7e-aaec-7fc90ee8fdfd","Type":"ContainerStarted","Data":"a2da5fde5e5f74a8ea346682aff0cc4da15da6b2b0c3322814548a2b3f7c5f72"} Oct 14 14:12:41 crc kubenswrapper[4725]: I1014 14:12:41.361146 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"a47eaf15-990a-4f7e-aaec-7fc90ee8fdfd","Type":"ContainerStarted","Data":"ec09cbfeff28a075ba7c0373ef8a422e8ee7c2b055a3db3e1291cab29fdca717"} Oct 14 14:12:41 crc kubenswrapper[4725]: I1014 14:12:41.381016 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.519453516 podStartE2EDuration="2.381000696s" podCreationTimestamp="2025-10-14 14:12:39 +0000 UTC" firstStartedPulling="2025-10-14 14:12:40.165930148 +0000 UTC m=+3477.014364957" lastFinishedPulling="2025-10-14 14:12:41.027477328 +0000 UTC m=+3477.875912137" observedRunningTime="2025-10-14 14:12:41.374203534 +0000 UTC m=+3478.222638343" watchObservedRunningTime="2025-10-14 14:12:41.381000696 +0000 UTC m=+3478.229435505" Oct 14 14:12:58 crc kubenswrapper[4725]: I1014 14:12:58.611357 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mwvps/must-gather-p7fzg"] Oct 14 14:12:58 crc kubenswrapper[4725]: I1014 14:12:58.613711 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mwvps/must-gather-p7fzg" Oct 14 14:12:58 crc kubenswrapper[4725]: I1014 14:12:58.618192 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-mwvps"/"openshift-service-ca.crt" Oct 14 14:12:58 crc kubenswrapper[4725]: I1014 14:12:58.618883 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-mwvps"/"kube-root-ca.crt" Oct 14 14:12:58 crc kubenswrapper[4725]: I1014 14:12:58.637337 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-mwvps/must-gather-p7fzg"] Oct 14 14:12:58 crc kubenswrapper[4725]: I1014 14:12:58.698023 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52vtp\" (UniqueName: \"kubernetes.io/projected/976babc1-ded9-4980-92a2-0354b78672eb-kube-api-access-52vtp\") pod \"must-gather-p7fzg\" (UID: \"976babc1-ded9-4980-92a2-0354b78672eb\") " pod="openshift-must-gather-mwvps/must-gather-p7fzg" Oct 14 14:12:58 crc kubenswrapper[4725]: I1014 14:12:58.698345 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/976babc1-ded9-4980-92a2-0354b78672eb-must-gather-output\") pod \"must-gather-p7fzg\" (UID: \"976babc1-ded9-4980-92a2-0354b78672eb\") " pod="openshift-must-gather-mwvps/must-gather-p7fzg" Oct 14 14:12:58 crc kubenswrapper[4725]: I1014 14:12:58.799829 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52vtp\" (UniqueName: \"kubernetes.io/projected/976babc1-ded9-4980-92a2-0354b78672eb-kube-api-access-52vtp\") pod \"must-gather-p7fzg\" (UID: \"976babc1-ded9-4980-92a2-0354b78672eb\") " pod="openshift-must-gather-mwvps/must-gather-p7fzg" Oct 14 14:12:58 crc kubenswrapper[4725]: I1014 14:12:58.799921 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/976babc1-ded9-4980-92a2-0354b78672eb-must-gather-output\") pod \"must-gather-p7fzg\" (UID: \"976babc1-ded9-4980-92a2-0354b78672eb\") " pod="openshift-must-gather-mwvps/must-gather-p7fzg" Oct 14 14:12:58 crc kubenswrapper[4725]: I1014 14:12:58.800490 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/976babc1-ded9-4980-92a2-0354b78672eb-must-gather-output\") pod \"must-gather-p7fzg\" (UID: \"976babc1-ded9-4980-92a2-0354b78672eb\") " pod="openshift-must-gather-mwvps/must-gather-p7fzg" Oct 14 14:12:58 crc kubenswrapper[4725]: I1014 14:12:58.819721 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52vtp\" (UniqueName: \"kubernetes.io/projected/976babc1-ded9-4980-92a2-0354b78672eb-kube-api-access-52vtp\") pod \"must-gather-p7fzg\" (UID: \"976babc1-ded9-4980-92a2-0354b78672eb\") " pod="openshift-must-gather-mwvps/must-gather-p7fzg" Oct 14 14:12:58 crc kubenswrapper[4725]: I1014 14:12:58.932369 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mwvps/must-gather-p7fzg" Oct 14 14:12:59 crc kubenswrapper[4725]: I1014 14:12:59.474428 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-mwvps/must-gather-p7fzg"] Oct 14 14:12:59 crc kubenswrapper[4725]: I1014 14:12:59.529766 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mwvps/must-gather-p7fzg" event={"ID":"976babc1-ded9-4980-92a2-0354b78672eb","Type":"ContainerStarted","Data":"d7eda3325b04b30313fcd1ca5fc0ca3a08ea7a09e121538b872e5e757f9b2541"} Oct 14 14:13:03 crc kubenswrapper[4725]: I1014 14:13:03.577945 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mwvps/must-gather-p7fzg" event={"ID":"976babc1-ded9-4980-92a2-0354b78672eb","Type":"ContainerStarted","Data":"711b81e3f3c4941217d9276229f64f152ea57796f16e0b28f951c98a2c1af077"} Oct 14 14:13:04 crc kubenswrapper[4725]: I1014 14:13:04.596320 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mwvps/must-gather-p7fzg" event={"ID":"976babc1-ded9-4980-92a2-0354b78672eb","Type":"ContainerStarted","Data":"e3b90394f98758c64dea80f2c4e4aab14cdf79155dc9fac12bfb41d530bbd34d"} Oct 14 14:13:04 crc kubenswrapper[4725]: I1014 14:13:04.610375 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-mwvps/must-gather-p7fzg" podStartSLOduration=2.883688764 podStartE2EDuration="6.61035355s" podCreationTimestamp="2025-10-14 14:12:58 +0000 UTC" firstStartedPulling="2025-10-14 14:12:59.473721364 +0000 UTC m=+3496.322156193" lastFinishedPulling="2025-10-14 14:13:03.20038616 +0000 UTC m=+3500.048820979" observedRunningTime="2025-10-14 14:13:04.609417475 +0000 UTC m=+3501.457852284" watchObservedRunningTime="2025-10-14 14:13:04.61035355 +0000 UTC m=+3501.458788349" Oct 14 14:13:06 crc kubenswrapper[4725]: I1014 14:13:06.909198 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mwvps/crc-debug-x5m98"] Oct 14 14:13:06 crc kubenswrapper[4725]: I1014 14:13:06.912118 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mwvps/crc-debug-x5m98" Oct 14 14:13:06 crc kubenswrapper[4725]: I1014 14:13:06.917635 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-mwvps"/"default-dockercfg-hbbt8" Oct 14 14:13:07 crc kubenswrapper[4725]: I1014 14:13:07.060497 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f4dbd83-e9b7-4781-8f7f-079487bfb60f-host\") pod \"crc-debug-x5m98\" (UID: \"6f4dbd83-e9b7-4781-8f7f-079487bfb60f\") " pod="openshift-must-gather-mwvps/crc-debug-x5m98" Oct 14 14:13:07 crc kubenswrapper[4725]: I1014 14:13:07.061144 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc74q\" (UniqueName: \"kubernetes.io/projected/6f4dbd83-e9b7-4781-8f7f-079487bfb60f-kube-api-access-dc74q\") pod \"crc-debug-x5m98\" (UID: \"6f4dbd83-e9b7-4781-8f7f-079487bfb60f\") " pod="openshift-must-gather-mwvps/crc-debug-x5m98" Oct 14 14:13:07 crc kubenswrapper[4725]: I1014 14:13:07.162894 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc74q\" (UniqueName: \"kubernetes.io/projected/6f4dbd83-e9b7-4781-8f7f-079487bfb60f-kube-api-access-dc74q\") pod \"crc-debug-x5m98\" (UID: \"6f4dbd83-e9b7-4781-8f7f-079487bfb60f\") " pod="openshift-must-gather-mwvps/crc-debug-x5m98" Oct 14 14:13:07 crc kubenswrapper[4725]: I1014 14:13:07.162980 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f4dbd83-e9b7-4781-8f7f-079487bfb60f-host\") pod \"crc-debug-x5m98\" (UID: \"6f4dbd83-e9b7-4781-8f7f-079487bfb60f\") " pod="openshift-must-gather-mwvps/crc-debug-x5m98" Oct 14 14:13:07 crc kubenswrapper[4725]: I1014 14:13:07.163153 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f4dbd83-e9b7-4781-8f7f-079487bfb60f-host\") pod \"crc-debug-x5m98\" (UID: \"6f4dbd83-e9b7-4781-8f7f-079487bfb60f\") " pod="openshift-must-gather-mwvps/crc-debug-x5m98" Oct 14 14:13:07 crc kubenswrapper[4725]: I1014 14:13:07.180905 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc74q\" (UniqueName: \"kubernetes.io/projected/6f4dbd83-e9b7-4781-8f7f-079487bfb60f-kube-api-access-dc74q\") pod \"crc-debug-x5m98\" (UID: \"6f4dbd83-e9b7-4781-8f7f-079487bfb60f\") " pod="openshift-must-gather-mwvps/crc-debug-x5m98" Oct 14 14:13:07 crc kubenswrapper[4725]: I1014 14:13:07.232008 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mwvps/crc-debug-x5m98" Oct 14 14:13:07 crc kubenswrapper[4725]: W1014 14:13:07.268246 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f4dbd83_e9b7_4781_8f7f_079487bfb60f.slice/crio-88c16a30e0b558ca23605c0bab5303c3b13fa4d155e3a1e5fdf5d73ff1c7d4f0 WatchSource:0}: Error finding container 88c16a30e0b558ca23605c0bab5303c3b13fa4d155e3a1e5fdf5d73ff1c7d4f0: Status 404 returned error can't find the container with id 88c16a30e0b558ca23605c0bab5303c3b13fa4d155e3a1e5fdf5d73ff1c7d4f0 Oct 14 14:13:07 crc kubenswrapper[4725]: I1014 14:13:07.637298 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mwvps/crc-debug-x5m98" event={"ID":"6f4dbd83-e9b7-4781-8f7f-079487bfb60f","Type":"ContainerStarted","Data":"88c16a30e0b558ca23605c0bab5303c3b13fa4d155e3a1e5fdf5d73ff1c7d4f0"} Oct 14 14:13:17 crc kubenswrapper[4725]: I1014 14:13:17.727910 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mwvps/crc-debug-x5m98" event={"ID":"6f4dbd83-e9b7-4781-8f7f-079487bfb60f","Type":"ContainerStarted","Data":"3fc2db4314f5110e6163825db8a8cc51adfa364d46e3b9a6da1deb9d4fb46fd9"} Oct 14 14:13:17 crc kubenswrapper[4725]: I1014 14:13:17.753792 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-mwvps/crc-debug-x5m98" podStartSLOduration=2.135358933 podStartE2EDuration="11.753768136s" podCreationTimestamp="2025-10-14 14:13:06 +0000 UTC" firstStartedPulling="2025-10-14 14:13:07.279615838 +0000 UTC m=+3504.128050647" lastFinishedPulling="2025-10-14 14:13:16.898025041 +0000 UTC m=+3513.746459850" observedRunningTime="2025-10-14 14:13:17.74386118 +0000 UTC m=+3514.592295999" watchObservedRunningTime="2025-10-14 14:13:17.753768136 +0000 UTC m=+3514.602202965" Oct 14 14:13:57 crc kubenswrapper[4725]: I1014 14:13:57.065147 4725 generic.go:334] "Generic (PLEG): container finished" podID="6f4dbd83-e9b7-4781-8f7f-079487bfb60f" containerID="3fc2db4314f5110e6163825db8a8cc51adfa364d46e3b9a6da1deb9d4fb46fd9" exitCode=0 Oct 14 14:13:57 crc kubenswrapper[4725]: I1014 14:13:57.065208 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mwvps/crc-debug-x5m98" event={"ID":"6f4dbd83-e9b7-4781-8f7f-079487bfb60f","Type":"ContainerDied","Data":"3fc2db4314f5110e6163825db8a8cc51adfa364d46e3b9a6da1deb9d4fb46fd9"} Oct 14 14:13:58 crc kubenswrapper[4725]: I1014 14:13:58.182874 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mwvps/crc-debug-x5m98" Oct 14 14:13:58 crc kubenswrapper[4725]: I1014 14:13:58.228347 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mwvps/crc-debug-x5m98"] Oct 14 14:13:58 crc kubenswrapper[4725]: I1014 14:13:58.239405 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mwvps/crc-debug-x5m98"] Oct 14 14:13:58 crc kubenswrapper[4725]: I1014 14:13:58.253670 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dc74q\" (UniqueName: \"kubernetes.io/projected/6f4dbd83-e9b7-4781-8f7f-079487bfb60f-kube-api-access-dc74q\") pod \"6f4dbd83-e9b7-4781-8f7f-079487bfb60f\" (UID: \"6f4dbd83-e9b7-4781-8f7f-079487bfb60f\") " Oct 14 14:13:58 crc kubenswrapper[4725]: I1014 14:13:58.253748 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f4dbd83-e9b7-4781-8f7f-079487bfb60f-host\") pod \"6f4dbd83-e9b7-4781-8f7f-079487bfb60f\" (UID: \"6f4dbd83-e9b7-4781-8f7f-079487bfb60f\") " Oct 14 14:13:58 crc kubenswrapper[4725]: I1014 14:13:58.254063 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f4dbd83-e9b7-4781-8f7f-079487bfb60f-host" (OuterVolumeSpecName: "host") pod "6f4dbd83-e9b7-4781-8f7f-079487bfb60f" (UID: "6f4dbd83-e9b7-4781-8f7f-079487bfb60f"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 14:13:58 crc kubenswrapper[4725]: I1014 14:13:58.254315 4725 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f4dbd83-e9b7-4781-8f7f-079487bfb60f-host\") on node \"crc\" DevicePath \"\"" Oct 14 14:13:58 crc kubenswrapper[4725]: I1014 14:13:58.260063 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f4dbd83-e9b7-4781-8f7f-079487bfb60f-kube-api-access-dc74q" (OuterVolumeSpecName: "kube-api-access-dc74q") pod "6f4dbd83-e9b7-4781-8f7f-079487bfb60f" (UID: "6f4dbd83-e9b7-4781-8f7f-079487bfb60f"). InnerVolumeSpecName "kube-api-access-dc74q". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:13:58 crc kubenswrapper[4725]: I1014 14:13:58.356109 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dc74q\" (UniqueName: \"kubernetes.io/projected/6f4dbd83-e9b7-4781-8f7f-079487bfb60f-kube-api-access-dc74q\") on node \"crc\" DevicePath \"\"" Oct 14 14:13:59 crc kubenswrapper[4725]: I1014 14:13:59.089081 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88c16a30e0b558ca23605c0bab5303c3b13fa4d155e3a1e5fdf5d73ff1c7d4f0" Oct 14 14:13:59 crc kubenswrapper[4725]: I1014 14:13:59.089227 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mwvps/crc-debug-x5m98" Oct 14 14:13:59 crc kubenswrapper[4725]: I1014 14:13:59.413138 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mwvps/crc-debug-g87br"] Oct 14 14:13:59 crc kubenswrapper[4725]: E1014 14:13:59.416943 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f4dbd83-e9b7-4781-8f7f-079487bfb60f" containerName="container-00" Oct 14 14:13:59 crc kubenswrapper[4725]: I1014 14:13:59.417153 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f4dbd83-e9b7-4781-8f7f-079487bfb60f" containerName="container-00" Oct 14 14:13:59 crc kubenswrapper[4725]: I1014 14:13:59.418797 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f4dbd83-e9b7-4781-8f7f-079487bfb60f" containerName="container-00" Oct 14 14:13:59 crc kubenswrapper[4725]: I1014 14:13:59.420404 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mwvps/crc-debug-g87br" Oct 14 14:13:59 crc kubenswrapper[4725]: I1014 14:13:59.434044 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-mwvps"/"default-dockercfg-hbbt8" Oct 14 14:13:59 crc kubenswrapper[4725]: I1014 14:13:59.477417 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvqq7\" (UniqueName: \"kubernetes.io/projected/612452e8-92a5-45a9-9bf3-3216512478a3-kube-api-access-rvqq7\") pod \"crc-debug-g87br\" (UID: \"612452e8-92a5-45a9-9bf3-3216512478a3\") " pod="openshift-must-gather-mwvps/crc-debug-g87br" Oct 14 14:13:59 crc kubenswrapper[4725]: I1014 14:13:59.477506 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/612452e8-92a5-45a9-9bf3-3216512478a3-host\") pod \"crc-debug-g87br\" (UID: \"612452e8-92a5-45a9-9bf3-3216512478a3\") " pod="openshift-must-gather-mwvps/crc-debug-g87br" Oct 14 14:13:59 crc kubenswrapper[4725]: I1014 14:13:59.578938 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvqq7\" (UniqueName: \"kubernetes.io/projected/612452e8-92a5-45a9-9bf3-3216512478a3-kube-api-access-rvqq7\") pod \"crc-debug-g87br\" (UID: \"612452e8-92a5-45a9-9bf3-3216512478a3\") " pod="openshift-must-gather-mwvps/crc-debug-g87br" Oct 14 14:13:59 crc kubenswrapper[4725]: I1014 14:13:59.579020 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/612452e8-92a5-45a9-9bf3-3216512478a3-host\") pod \"crc-debug-g87br\" (UID: \"612452e8-92a5-45a9-9bf3-3216512478a3\") " pod="openshift-must-gather-mwvps/crc-debug-g87br" Oct 14 14:13:59 crc kubenswrapper[4725]: I1014 14:13:59.579182 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/612452e8-92a5-45a9-9bf3-3216512478a3-host\") pod \"crc-debug-g87br\" (UID: \"612452e8-92a5-45a9-9bf3-3216512478a3\") " pod="openshift-must-gather-mwvps/crc-debug-g87br" Oct 14 14:13:59 crc kubenswrapper[4725]: I1014 14:13:59.609894 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvqq7\" (UniqueName: \"kubernetes.io/projected/612452e8-92a5-45a9-9bf3-3216512478a3-kube-api-access-rvqq7\") pod \"crc-debug-g87br\" (UID: \"612452e8-92a5-45a9-9bf3-3216512478a3\") " pod="openshift-must-gather-mwvps/crc-debug-g87br" Oct 14 14:13:59 crc kubenswrapper[4725]: I1014 
14:13:59.750774 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mwvps/crc-debug-g87br" Oct 14 14:13:59 crc kubenswrapper[4725]: W1014 14:13:59.831771 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod612452e8_92a5_45a9_9bf3_3216512478a3.slice/crio-891add022ec9312d4d547c309494de052a0ed55b57dc9f919969e5c010a772fc WatchSource:0}: Error finding container 891add022ec9312d4d547c309494de052a0ed55b57dc9f919969e5c010a772fc: Status 404 returned error can't find the container with id 891add022ec9312d4d547c309494de052a0ed55b57dc9f919969e5c010a772fc Oct 14 14:13:59 crc kubenswrapper[4725]: I1014 14:13:59.936610 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f4dbd83-e9b7-4781-8f7f-079487bfb60f" path="/var/lib/kubelet/pods/6f4dbd83-e9b7-4781-8f7f-079487bfb60f/volumes" Oct 14 14:14:00 crc kubenswrapper[4725]: I1014 14:14:00.099353 4725 generic.go:334] "Generic (PLEG): container finished" podID="612452e8-92a5-45a9-9bf3-3216512478a3" containerID="980b4bf6214d0f428b75c1df5c64c9a93d70afb197d940c645d2502f3daca366" exitCode=0 Oct 14 14:14:00 crc kubenswrapper[4725]: I1014 14:14:00.099402 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mwvps/crc-debug-g87br" event={"ID":"612452e8-92a5-45a9-9bf3-3216512478a3","Type":"ContainerDied","Data":"980b4bf6214d0f428b75c1df5c64c9a93d70afb197d940c645d2502f3daca366"} Oct 14 14:14:00 crc kubenswrapper[4725]: I1014 14:14:00.099430 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mwvps/crc-debug-g87br" event={"ID":"612452e8-92a5-45a9-9bf3-3216512478a3","Type":"ContainerStarted","Data":"891add022ec9312d4d547c309494de052a0ed55b57dc9f919969e5c010a772fc"} Oct 14 14:14:00 crc kubenswrapper[4725]: I1014 14:14:00.624175 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mwvps/crc-debug-g87br"] Oct 14 14:14:00 crc kubenswrapper[4725]: I1014 14:14:00.630642 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mwvps/crc-debug-g87br"] Oct 14 14:14:01 crc kubenswrapper[4725]: I1014 14:14:01.237936 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mwvps/crc-debug-g87br" Oct 14 14:14:01 crc kubenswrapper[4725]: I1014 14:14:01.311089 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/612452e8-92a5-45a9-9bf3-3216512478a3-host\") pod \"612452e8-92a5-45a9-9bf3-3216512478a3\" (UID: \"612452e8-92a5-45a9-9bf3-3216512478a3\") " Oct 14 14:14:01 crc kubenswrapper[4725]: I1014 14:14:01.311203 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/612452e8-92a5-45a9-9bf3-3216512478a3-host" (OuterVolumeSpecName: "host") pod "612452e8-92a5-45a9-9bf3-3216512478a3" (UID: "612452e8-92a5-45a9-9bf3-3216512478a3"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 14:14:01 crc kubenswrapper[4725]: I1014 14:14:01.311210 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvqq7\" (UniqueName: \"kubernetes.io/projected/612452e8-92a5-45a9-9bf3-3216512478a3-kube-api-access-rvqq7\") pod \"612452e8-92a5-45a9-9bf3-3216512478a3\" (UID: \"612452e8-92a5-45a9-9bf3-3216512478a3\") " Oct 14 14:14:01 crc kubenswrapper[4725]: I1014 14:14:01.311634 4725 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/612452e8-92a5-45a9-9bf3-3216512478a3-host\") on node \"crc\" DevicePath \"\"" Oct 14 14:14:01 crc kubenswrapper[4725]: I1014 14:14:01.316395 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/612452e8-92a5-45a9-9bf3-3216512478a3-kube-api-access-rvqq7" (OuterVolumeSpecName: "kube-api-access-rvqq7") pod "612452e8-92a5-45a9-9bf3-3216512478a3" (UID: "612452e8-92a5-45a9-9bf3-3216512478a3"). InnerVolumeSpecName "kube-api-access-rvqq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:14:01 crc kubenswrapper[4725]: I1014 14:14:01.413089 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvqq7\" (UniqueName: \"kubernetes.io/projected/612452e8-92a5-45a9-9bf3-3216512478a3-kube-api-access-rvqq7\") on node \"crc\" DevicePath \"\"" Oct 14 14:14:01 crc kubenswrapper[4725]: I1014 14:14:01.820342 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mwvps/crc-debug-nkjz4"] Oct 14 14:14:01 crc kubenswrapper[4725]: E1014 14:14:01.820850 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="612452e8-92a5-45a9-9bf3-3216512478a3" containerName="container-00" Oct 14 14:14:01 crc kubenswrapper[4725]: I1014 14:14:01.820866 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="612452e8-92a5-45a9-9bf3-3216512478a3" containerName="container-00" Oct 14 14:14:01 crc kubenswrapper[4725]: I1014 14:14:01.821135 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="612452e8-92a5-45a9-9bf3-3216512478a3" containerName="container-00" Oct 14 14:14:01 crc kubenswrapper[4725]: I1014 14:14:01.821933 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mwvps/crc-debug-nkjz4" Oct 14 14:14:01 crc kubenswrapper[4725]: I1014 14:14:01.922745 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7v7j\" (UniqueName: \"kubernetes.io/projected/4fb94153-964f-4de4-acbb-a1dd0d04ae2e-kube-api-access-x7v7j\") pod \"crc-debug-nkjz4\" (UID: \"4fb94153-964f-4de4-acbb-a1dd0d04ae2e\") " pod="openshift-must-gather-mwvps/crc-debug-nkjz4" Oct 14 14:14:01 crc kubenswrapper[4725]: I1014 14:14:01.923094 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4fb94153-964f-4de4-acbb-a1dd0d04ae2e-host\") pod \"crc-debug-nkjz4\" (UID: \"4fb94153-964f-4de4-acbb-a1dd0d04ae2e\") " pod="openshift-must-gather-mwvps/crc-debug-nkjz4" Oct 14 14:14:01 crc kubenswrapper[4725]: I1014 14:14:01.934101 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="612452e8-92a5-45a9-9bf3-3216512478a3" path="/var/lib/kubelet/pods/612452e8-92a5-45a9-9bf3-3216512478a3/volumes" Oct 14 14:14:02 crc kubenswrapper[4725]: I1014 14:14:02.025168 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7v7j\" (UniqueName: \"kubernetes.io/projected/4fb94153-964f-4de4-acbb-a1dd0d04ae2e-kube-api-access-x7v7j\") pod \"crc-debug-nkjz4\" (UID: \"4fb94153-964f-4de4-acbb-a1dd0d04ae2e\") " pod="openshift-must-gather-mwvps/crc-debug-nkjz4" Oct 14 14:14:02 crc kubenswrapper[4725]: I1014 14:14:02.025244 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4fb94153-964f-4de4-acbb-a1dd0d04ae2e-host\") pod \"crc-debug-nkjz4\" (UID: \"4fb94153-964f-4de4-acbb-a1dd0d04ae2e\") " pod="openshift-must-gather-mwvps/crc-debug-nkjz4" Oct 14 14:14:02 crc kubenswrapper[4725]: I1014 14:14:02.025405 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4fb94153-964f-4de4-acbb-a1dd0d04ae2e-host\") pod \"crc-debug-nkjz4\" (UID: \"4fb94153-964f-4de4-acbb-a1dd0d04ae2e\") " pod="openshift-must-gather-mwvps/crc-debug-nkjz4" Oct 14 14:14:02 crc kubenswrapper[4725]: I1014 14:14:02.055163 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7v7j\" (UniqueName: \"kubernetes.io/projected/4fb94153-964f-4de4-acbb-a1dd0d04ae2e-kube-api-access-x7v7j\") pod \"crc-debug-nkjz4\" (UID: \"4fb94153-964f-4de4-acbb-a1dd0d04ae2e\") " pod="openshift-must-gather-mwvps/crc-debug-nkjz4" Oct 14 14:14:02 crc kubenswrapper[4725]: I1014 14:14:02.126596 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mwvps/crc-debug-g87br" Oct 14 14:14:02 crc kubenswrapper[4725]: I1014 14:14:02.127021 4725 scope.go:117] "RemoveContainer" containerID="980b4bf6214d0f428b75c1df5c64c9a93d70afb197d940c645d2502f3daca366" Oct 14 14:14:02 crc kubenswrapper[4725]: I1014 14:14:02.139506 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mwvps/crc-debug-nkjz4" Oct 14 14:14:02 crc kubenswrapper[4725]: I1014 14:14:02.660894 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5b8b4d8db-n47v4_6497a894-212e-4478-844f-8e401f1de8fa/barbican-api/0.log" Oct 14 14:14:02 crc kubenswrapper[4725]: I1014 14:14:02.691804 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5b8b4d8db-n47v4_6497a894-212e-4478-844f-8e401f1de8fa/barbican-api-log/0.log" Oct 14 14:14:02 crc kubenswrapper[4725]: I1014 14:14:02.841501 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-65974cff8b-hfxck_f6cdc779-b232-4adb-9a2e-1605f2ebadbf/barbican-keystone-listener/0.log" Oct 14 14:14:02 crc kubenswrapper[4725]: I1014 14:14:02.906932 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-65974cff8b-hfxck_f6cdc779-b232-4adb-9a2e-1605f2ebadbf/barbican-keystone-listener-log/0.log" Oct 14 14:14:03 crc kubenswrapper[4725]: I1014 14:14:03.049242 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6bf494f4f9-qtck9_af54da1d-ac00-444e-b85b-6f4a5d286dc6/barbican-worker/0.log" Oct 14 14:14:03 crc kubenswrapper[4725]: I1014 14:14:03.089154 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6bf494f4f9-qtck9_af54da1d-ac00-444e-b85b-6f4a5d286dc6/barbican-worker-log/0.log" Oct 14 14:14:03 crc kubenswrapper[4725]: I1014 14:14:03.140776 4725 generic.go:334] "Generic (PLEG): container finished" podID="4fb94153-964f-4de4-acbb-a1dd0d04ae2e" containerID="d87083b525bd65939b0f531467412ef70397ee97d402d9938902f44c019ecf30" exitCode=0 Oct 14 14:14:03 crc kubenswrapper[4725]: I1014 14:14:03.140859 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mwvps/crc-debug-nkjz4" event={"ID":"4fb94153-964f-4de4-acbb-a1dd0d04ae2e","Type":"ContainerDied","Data":"d87083b525bd65939b0f531467412ef70397ee97d402d9938902f44c019ecf30"} Oct 14 14:14:03 crc kubenswrapper[4725]: I1014 14:14:03.141176 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mwvps/crc-debug-nkjz4" event={"ID":"4fb94153-964f-4de4-acbb-a1dd0d04ae2e","Type":"ContainerStarted","Data":"eb9e1cda38e1d5d3ec2620417e11605f9b5bf6faaf455e49639227cae227210f"} Oct 14 14:14:03 crc kubenswrapper[4725]: I1014 14:14:03.181921 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mwvps/crc-debug-nkjz4"] Oct 14 14:14:03 crc kubenswrapper[4725]: I1014 14:14:03.193123 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mwvps/crc-debug-nkjz4"] Oct 14 14:14:03 crc kubenswrapper[4725]: I1014 14:14:03.294789 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-vgkf8_7fb7769c-8d15-4925-8f3a-3f8e5a81fd72/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 14:14:03 crc kubenswrapper[4725]: I1014 14:14:03.411865 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_17b19201-3dd2-4e20-bc62-3727faed2947/ceilometer-central-agent/0.log" Oct 14 14:14:03 crc kubenswrapper[4725]: I1014 14:14:03.487330 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_17b19201-3dd2-4e20-bc62-3727faed2947/ceilometer-notification-agent/0.log" Oct 14 14:14:03 crc kubenswrapper[4725]: I1014 14:14:03.511286 4725 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_ceilometer-0_17b19201-3dd2-4e20-bc62-3727faed2947/sg-core/0.log" Oct 14 14:14:03 crc kubenswrapper[4725]: I1014 14:14:03.519363 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_17b19201-3dd2-4e20-bc62-3727faed2947/proxy-httpd/0.log" Oct 14 14:14:03 crc kubenswrapper[4725]: I1014 14:14:03.693719 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_2909f1c8-fdcc-4565-a776-576f95ce7fa3/cinder-api/0.log" Oct 14 14:14:03 crc kubenswrapper[4725]: I1014 14:14:03.697529 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_2909f1c8-fdcc-4565-a776-576f95ce7fa3/cinder-api-log/0.log" Oct 14 14:14:03 crc kubenswrapper[4725]: I1014 14:14:03.826117 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e61091e2-13f1-418b-b9ea-0900f8cd786b/cinder-scheduler/0.log" Oct 14 14:14:03 crc kubenswrapper[4725]: I1014 14:14:03.881614 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e61091e2-13f1-418b-b9ea-0900f8cd786b/probe/0.log" Oct 14 14:14:03 crc kubenswrapper[4725]: I1014 14:14:03.940500 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-rmqbp_0292d7e3-6419-47e4-aa3f-acdfe45f9f01/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 14:14:04 crc kubenswrapper[4725]: I1014 14:14:04.106507 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-mnw6x_d261a991-7df5-4a6b-981d-b191a3d4702b/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 14:14:04 crc kubenswrapper[4725]: I1014 14:14:04.230275 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-x8xm9_b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 14:14:04 crc kubenswrapper[4725]: I1014 14:14:04.257711 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mwvps/crc-debug-nkjz4" Oct 14 14:14:04 crc kubenswrapper[4725]: I1014 14:14:04.358995 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-vngt8_8b1be84e-cdc5-433a-a684-b9938901a03a/init/0.log" Oct 14 14:14:04 crc kubenswrapper[4725]: I1014 14:14:04.369230 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7v7j\" (UniqueName: \"kubernetes.io/projected/4fb94153-964f-4de4-acbb-a1dd0d04ae2e-kube-api-access-x7v7j\") pod \"4fb94153-964f-4de4-acbb-a1dd0d04ae2e\" (UID: \"4fb94153-964f-4de4-acbb-a1dd0d04ae2e\") " Oct 14 14:14:04 crc kubenswrapper[4725]: I1014 14:14:04.369426 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4fb94153-964f-4de4-acbb-a1dd0d04ae2e-host\") pod \"4fb94153-964f-4de4-acbb-a1dd0d04ae2e\" (UID: \"4fb94153-964f-4de4-acbb-a1dd0d04ae2e\") " Oct 14 14:14:04 crc kubenswrapper[4725]: I1014 14:14:04.369525 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4fb94153-964f-4de4-acbb-a1dd0d04ae2e-host" (OuterVolumeSpecName: "host") pod "4fb94153-964f-4de4-acbb-a1dd0d04ae2e" (UID: "4fb94153-964f-4de4-acbb-a1dd0d04ae2e"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 14:14:04 crc kubenswrapper[4725]: I1014 14:14:04.369865 4725 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4fb94153-964f-4de4-acbb-a1dd0d04ae2e-host\") on node \"crc\" DevicePath \"\"" Oct 14 14:14:04 crc kubenswrapper[4725]: I1014 14:14:04.374929 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fb94153-964f-4de4-acbb-a1dd0d04ae2e-kube-api-access-x7v7j" (OuterVolumeSpecName: "kube-api-access-x7v7j") pod "4fb94153-964f-4de4-acbb-a1dd0d04ae2e" (UID: "4fb94153-964f-4de4-acbb-a1dd0d04ae2e"). InnerVolumeSpecName "kube-api-access-x7v7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:14:04 crc kubenswrapper[4725]: I1014 14:14:04.471217 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7v7j\" (UniqueName: \"kubernetes.io/projected/4fb94153-964f-4de4-acbb-a1dd0d04ae2e-kube-api-access-x7v7j\") on node \"crc\" DevicePath \"\"" Oct 14 14:14:04 crc kubenswrapper[4725]: I1014 14:14:04.565615 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-vngt8_8b1be84e-cdc5-433a-a684-b9938901a03a/init/0.log" Oct 14 14:14:04 crc kubenswrapper[4725]: I1014 14:14:04.596283 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-vngt8_8b1be84e-cdc5-433a-a684-b9938901a03a/dnsmasq-dns/0.log" Oct 14 14:14:04 crc kubenswrapper[4725]: I1014 14:14:04.778500 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-frd7h_9ba9f871-37fb-47be-b960-dc85368cff29/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 14:14:04 crc kubenswrapper[4725]: I1014 14:14:04.962339 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_8afd9cea-f51f-4174-b856-90b357779daf/glance-httpd/0.log" Oct 14 14:14:04 crc kubenswrapper[4725]: I1014 14:14:04.984916 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_8afd9cea-f51f-4174-b856-90b357779daf/glance-log/0.log" Oct 14 14:14:05 crc kubenswrapper[4725]: I1014 14:14:05.152779 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_6c7c1e20-3ab6-42f7-81f6-aa6444a258ec/glance-log/0.log" Oct 14 14:14:05 crc kubenswrapper[4725]: I1014 14:14:05.167310 4725 scope.go:117] "RemoveContainer" containerID="d87083b525bd65939b0f531467412ef70397ee97d402d9938902f44c019ecf30" Oct 14 14:14:05 crc kubenswrapper[4725]: I1014 14:14:05.167354 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mwvps/crc-debug-nkjz4" Oct 14 14:14:05 crc kubenswrapper[4725]: I1014 14:14:05.215245 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_6c7c1e20-3ab6-42f7-81f6-aa6444a258ec/glance-httpd/0.log" Oct 14 14:14:05 crc kubenswrapper[4725]: I1014 14:14:05.379826 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7cdf854644-xbv6p_0f50192b-c5ae-418d-9d3a-a670d49f8ded/horizon/0.log" Oct 14 14:14:05 crc kubenswrapper[4725]: I1014 14:14:05.475971 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m_f9f32163-993e-4610-8c70-aaae1d52dc40/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 14:14:05 crc kubenswrapper[4725]: I1014 14:14:05.680857 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7cdf854644-xbv6p_0f50192b-c5ae-418d-9d3a-a670d49f8ded/horizon-log/0.log" Oct 14 14:14:05 crc kubenswrapper[4725]: I1014 14:14:05.702862 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-h7bvm_8b1732ba-bcae-4c5f-96d2-230b650b8552/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 14:14:05 crc kubenswrapper[4725]: I1014 14:14:05.936199 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fb94153-964f-4de4-acbb-a1dd0d04ae2e" path="/var/lib/kubelet/pods/4fb94153-964f-4de4-acbb-a1dd0d04ae2e/volumes" Oct 14 14:14:06 crc kubenswrapper[4725]: I1014 14:14:06.055427 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29340841-nk2gk_db5e329b-6d98-4421-a995-f34e57421846/keystone-cron/0.log" Oct 14 14:14:06 crc kubenswrapper[4725]: I1014 14:14:06.058638 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-974b44687-rnvdw_425d7189-b6f4-4bdf-8e0c-7eed10df706d/keystone-api/0.log" Oct 14 14:14:06 crc kubenswrapper[4725]: I1014 14:14:06.073056 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_545ffde3-abf2-443a-853d-c4d2a35a7e56/kube-state-metrics/0.log" Oct 14 14:14:06 crc kubenswrapper[4725]: I1014 14:14:06.261050 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-tn5zz_6846c1d5-d8cd-40d7-a7b9-080f8f722248/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 14:14:06 crc kubenswrapper[4725]: I1014 14:14:06.615942 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-846bc7f557-srw79_8a4fb75f-e202-4df3-bb0c-bc889dd701d7/neutron-httpd/0.log" Oct 14 14:14:06 crc kubenswrapper[4725]: I1014 14:14:06.663835 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74_017a8a26-0804-4e78-972b-e74224e16f72/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 14:14:06 crc kubenswrapper[4725]: I1014 14:14:06.677499 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-846bc7f557-srw79_8a4fb75f-e202-4df3-bb0c-bc889dd701d7/neutron-api/0.log" Oct 14 14:14:07 crc kubenswrapper[4725]: I1014 14:14:07.240724 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_4ecc0eca-3a69-4d6c-be9c-a6af5c0fceff/nova-cell0-conductor-conductor/0.log" Oct 14 14:14:07 crc kubenswrapper[4725]: I1014 14:14:07.246901 4725 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_nova-api-0_451fbb86-072b-41e1-8c6a-2433844f2e6e/nova-api-log/0.log" Oct 14 14:14:07 crc kubenswrapper[4725]: I1014 14:14:07.418938 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_451fbb86-072b-41e1-8c6a-2433844f2e6e/nova-api-api/0.log" Oct 14 14:14:07 crc kubenswrapper[4725]: I1014 14:14:07.698897 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_307135f2-8895-4193-aab6-077f7bd59bec/nova-cell1-conductor-conductor/0.log" Oct 14 14:14:07 crc kubenswrapper[4725]: I1014 14:14:07.701825 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_bb3ef2cb-5705-496c-9419-7609209d830d/nova-cell1-novncproxy-novncproxy/0.log" Oct 14 14:14:07 crc kubenswrapper[4725]: I1014 14:14:07.820195 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-6dd7s_23eebac6-ae71-4a11-bad6-e58bdcb8d716/nova-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 14:14:08 crc kubenswrapper[4725]: I1014 14:14:08.070090 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_42ae164c-62cd-48cd-a5cf-17ce40c2cc61/nova-metadata-log/0.log" Oct 14 14:14:08 crc kubenswrapper[4725]: I1014 14:14:08.214137 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_720e68c1-d52e-4606-a3ac-a331c8039890/nova-scheduler-scheduler/0.log" Oct 14 14:14:08 crc kubenswrapper[4725]: I1014 14:14:08.297034 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_697b603d-cd65-466b-930a-86c43c8483ba/mysql-bootstrap/0.log" Oct 14 14:14:08 crc kubenswrapper[4725]: I1014 14:14:08.513631 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_697b603d-cd65-466b-930a-86c43c8483ba/mysql-bootstrap/0.log" Oct 14 14:14:08 crc kubenswrapper[4725]: I1014 14:14:08.550249 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_697b603d-cd65-466b-930a-86c43c8483ba/galera/0.log" Oct 14 14:14:08 crc kubenswrapper[4725]: I1014 14:14:08.738304 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_b72e8db7-f91f-41b1-95bd-366cd156f5ed/mysql-bootstrap/0.log" Oct 14 14:14:08 crc kubenswrapper[4725]: I1014 14:14:08.966353 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_b72e8db7-f91f-41b1-95bd-366cd156f5ed/galera/0.log" Oct 14 14:14:08 crc kubenswrapper[4725]: I1014 14:14:08.973347 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_42ae164c-62cd-48cd-a5cf-17ce40c2cc61/nova-metadata-metadata/0.log" Oct 14 14:14:08 crc kubenswrapper[4725]: I1014 14:14:08.999692 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_b72e8db7-f91f-41b1-95bd-366cd156f5ed/mysql-bootstrap/0.log" Oct 14 14:14:09 crc kubenswrapper[4725]: I1014 14:14:09.177165 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_3e432c2c-86bc-4b07-81dd-a98be5ad1ca9/openstackclient/0.log" Oct 14 14:14:09 crc kubenswrapper[4725]: I1014 14:14:09.225771 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-lx5gg_d31243a3-876e-4912-a468-195483df0425/openstack-network-exporter/0.log" Oct 14 14:14:09 crc kubenswrapper[4725]: I1014 14:14:09.368507 4725 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_ovn-controller-ovs-mbwcn_02b8ba43-5172-4bac-ac99-104ddf5aea0f/ovsdb-server-init/0.log" Oct 14 14:14:09 crc kubenswrapper[4725]: I1014 14:14:09.541644 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-mbwcn_02b8ba43-5172-4bac-ac99-104ddf5aea0f/ovs-vswitchd/0.log" Oct 14 14:14:09 crc kubenswrapper[4725]: I1014 14:14:09.611079 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-mbwcn_02b8ba43-5172-4bac-ac99-104ddf5aea0f/ovsdb-server-init/0.log" Oct 14 14:14:09 crc kubenswrapper[4725]: I1014 14:14:09.706770 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-mbwcn_02b8ba43-5172-4bac-ac99-104ddf5aea0f/ovsdb-server/0.log" Oct 14 14:14:09 crc kubenswrapper[4725]: I1014 14:14:09.736043 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-x2x2g_19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c/ovn-controller/0.log" Oct 14 14:14:09 crc kubenswrapper[4725]: I1014 14:14:09.875681 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-th8rg_e7889f82-d1de-4040-a197-5444bf9951c6/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 14:14:10 crc kubenswrapper[4725]: I1014 14:14:10.020233 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_0431437b-b27f-4e47-8a60-6aecd1148270/openstack-network-exporter/0.log" Oct 14 14:14:10 crc kubenswrapper[4725]: I1014 14:14:10.094941 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_0431437b-b27f-4e47-8a60-6aecd1148270/ovn-northd/0.log" Oct 14 14:14:10 crc kubenswrapper[4725]: I1014 14:14:10.220290 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_cb174fae-72d3-45b5-b008-80b4fa482f1e/openstack-network-exporter/0.log" Oct 14 14:14:10 crc kubenswrapper[4725]: I1014 14:14:10.302867 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_cb174fae-72d3-45b5-b008-80b4fa482f1e/ovsdbserver-nb/0.log" Oct 14 14:14:10 crc kubenswrapper[4725]: I1014 14:14:10.413956 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_45a32640-96e1-4f6d-9ace-039ad444fe9e/openstack-network-exporter/0.log" Oct 14 14:14:10 crc kubenswrapper[4725]: I1014 14:14:10.564244 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_45a32640-96e1-4f6d-9ace-039ad444fe9e/ovsdbserver-sb/0.log" Oct 14 14:14:10 crc kubenswrapper[4725]: I1014 14:14:10.776868 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-85547dff5b-c66wz_0163b2fd-7423-4a50-90a8-e312d0b4db22/placement-api/0.log" Oct 14 14:14:10 crc kubenswrapper[4725]: I1014 14:14:10.852228 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-85547dff5b-c66wz_0163b2fd-7423-4a50-90a8-e312d0b4db22/placement-log/0.log" Oct 14 14:14:10 crc kubenswrapper[4725]: I1014 14:14:10.921331 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e/setup-container/0.log" Oct 14 14:14:11 crc kubenswrapper[4725]: I1014 14:14:11.172781 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_cbf19e88-e140-4357-8255-fdc507d7db52/setup-container/0.log" Oct 14 14:14:11 crc kubenswrapper[4725]: I1014 14:14:11.254984 4725 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e/rabbitmq/0.log" Oct 14 14:14:11 crc kubenswrapper[4725]: I1014 14:14:11.265406 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e/setup-container/0.log" Oct 14 14:14:11 crc kubenswrapper[4725]: I1014 14:14:11.528678 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_cbf19e88-e140-4357-8255-fdc507d7db52/setup-container/0.log" Oct 14 14:14:11 crc kubenswrapper[4725]: I1014 14:14:11.590526 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-bhl58_dc2f4f76-dc9b-433c-81f6-dabc27cf63c9/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 14:14:11 crc kubenswrapper[4725]: I1014 14:14:11.632704 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_cbf19e88-e140-4357-8255-fdc507d7db52/rabbitmq/0.log" Oct 14 14:14:11 crc kubenswrapper[4725]: I1014 14:14:11.904420 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-z9g9d_0e7dc888-e069-4f97-84b9-02e9f37aec6c/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 14:14:11 crc kubenswrapper[4725]: I1014 14:14:11.907729 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-pb4gn_16f2164a-fbb3-4515-8732-723ea2301364/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 14:14:12 crc kubenswrapper[4725]: I1014 14:14:12.147183 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-594bc_9693426d-0bd5-4a57-84ca-6491f9fdc1a0/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 14:14:12 crc kubenswrapper[4725]: I1014 14:14:12.213313 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-qblng_f9161175-d899-4a2e-89cc-f49f51470e2f/ssh-known-hosts-edpm-deployment/0.log" Oct 14 14:14:12 crc kubenswrapper[4725]: I1014 14:14:12.367857 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-749ff78757-kdtzj_77d143f2-1e54-4c47-a06b-90136098179d/proxy-server/0.log" Oct 14 14:14:12 crc kubenswrapper[4725]: I1014 14:14:12.458603 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-749ff78757-kdtzj_77d143f2-1e54-4c47-a06b-90136098179d/proxy-httpd/0.log" Oct 14 14:14:12 crc kubenswrapper[4725]: I1014 14:14:12.586174 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-2h97x_02dca10f-0051-499a-b8e5-636a18d74f83/swift-ring-rebalance/0.log" Oct 14 14:14:12 crc kubenswrapper[4725]: I1014 14:14:12.631701 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b115803-e57f-4651-8a38-9b1aece05cdf/account-auditor/0.log" Oct 14 14:14:12 crc kubenswrapper[4725]: I1014 14:14:12.766922 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b115803-e57f-4651-8a38-9b1aece05cdf/account-reaper/0.log" Oct 14 14:14:12 crc kubenswrapper[4725]: I1014 14:14:12.814462 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b115803-e57f-4651-8a38-9b1aece05cdf/account-replicator/0.log" Oct 14 14:14:12 crc kubenswrapper[4725]: I1014 14:14:12.834311 4725 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_8b115803-e57f-4651-8a38-9b1aece05cdf/account-server/0.log" Oct 14 14:14:12 crc kubenswrapper[4725]: I1014 14:14:12.863122 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b115803-e57f-4651-8a38-9b1aece05cdf/container-auditor/0.log" Oct 14 14:14:12 crc kubenswrapper[4725]: I1014 14:14:12.992207 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b115803-e57f-4651-8a38-9b1aece05cdf/container-replicator/0.log" Oct 14 14:14:13 crc kubenswrapper[4725]: I1014 14:14:13.020623 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b115803-e57f-4651-8a38-9b1aece05cdf/container-updater/0.log" Oct 14 14:14:13 crc kubenswrapper[4725]: I1014 14:14:13.096752 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b115803-e57f-4651-8a38-9b1aece05cdf/container-server/0.log" Oct 14 14:14:13 crc kubenswrapper[4725]: I1014 14:14:13.170830 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b115803-e57f-4651-8a38-9b1aece05cdf/object-auditor/0.log" Oct 14 14:14:13 crc kubenswrapper[4725]: I1014 14:14:13.201815 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b115803-e57f-4651-8a38-9b1aece05cdf/object-expirer/0.log" Oct 14 14:14:13 crc kubenswrapper[4725]: I1014 14:14:13.240146 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b115803-e57f-4651-8a38-9b1aece05cdf/object-replicator/0.log" Oct 14 14:14:13 crc kubenswrapper[4725]: I1014 14:14:13.330218 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b115803-e57f-4651-8a38-9b1aece05cdf/object-server/0.log" Oct 14 14:14:13 crc kubenswrapper[4725]: I1014 14:14:13.358663 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b115803-e57f-4651-8a38-9b1aece05cdf/object-updater/0.log" Oct 14 14:14:13 crc kubenswrapper[4725]: I1014 14:14:13.414423 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b115803-e57f-4651-8a38-9b1aece05cdf/rsync/0.log" Oct 14 14:14:13 crc kubenswrapper[4725]: I1014 14:14:13.457584 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b115803-e57f-4651-8a38-9b1aece05cdf/swift-recon-cron/0.log" Oct 14 14:14:13 crc kubenswrapper[4725]: I1014 14:14:13.645807 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k_fd0a86b0-e908-472b-93d0-5eede843a424/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 14:14:13 crc kubenswrapper[4725]: I1014 14:14:13.672362 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_148578f5-c02c-4ef4-a214-87532b2d29e2/tempest-tests-tempest-tests-runner/0.log" Oct 14 14:14:14 crc kubenswrapper[4725]: I1014 14:14:14.054760 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_a47eaf15-990a-4f7e-aaec-7fc90ee8fdfd/test-operator-logs-container/0.log" Oct 14 14:14:14 crc kubenswrapper[4725]: I1014 14:14:14.168570 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-mlbgm_08321cb0-edbc-4ee5-8f5e-38084f62802a/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 14:14:20 crc 
kubenswrapper[4725]: I1014 14:14:20.250564 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_166bc2c3-283f-4c1d-815b-54fffa8192d5/memcached/0.log" Oct 14 14:14:32 crc kubenswrapper[4725]: I1014 14:14:32.521954 4725 patch_prober.go:28] interesting pod/machine-config-daemon-t9hh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 14:14:32 crc kubenswrapper[4725]: I1014 14:14:32.522554 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 14:14:36 crc kubenswrapper[4725]: I1014 14:14:36.705715 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_98d1e881be4ee8680d11b9a5631544c7dac60e779fa4b1480aa7108c3f4xjzs_61036e72-6020-469f-9406-d49df28fbd36/util/0.log" Oct 14 14:14:36 crc kubenswrapper[4725]: I1014 14:14:36.860979 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_98d1e881be4ee8680d11b9a5631544c7dac60e779fa4b1480aa7108c3f4xjzs_61036e72-6020-469f-9406-d49df28fbd36/pull/0.log" Oct 14 14:14:36 crc kubenswrapper[4725]: I1014 14:14:36.862311 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_98d1e881be4ee8680d11b9a5631544c7dac60e779fa4b1480aa7108c3f4xjzs_61036e72-6020-469f-9406-d49df28fbd36/util/0.log" Oct 14 14:14:36 crc kubenswrapper[4725]: I1014 14:14:36.916624 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_98d1e881be4ee8680d11b9a5631544c7dac60e779fa4b1480aa7108c3f4xjzs_61036e72-6020-469f-9406-d49df28fbd36/pull/0.log" Oct 14 14:14:37 crc kubenswrapper[4725]: I1014 14:14:37.043877 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_98d1e881be4ee8680d11b9a5631544c7dac60e779fa4b1480aa7108c3f4xjzs_61036e72-6020-469f-9406-d49df28fbd36/extract/0.log" Oct 14 14:14:37 crc kubenswrapper[4725]: I1014 14:14:37.066601 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_98d1e881be4ee8680d11b9a5631544c7dac60e779fa4b1480aa7108c3f4xjzs_61036e72-6020-469f-9406-d49df28fbd36/pull/0.log" Oct 14 14:14:37 crc kubenswrapper[4725]: I1014 14:14:37.068514 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_98d1e881be4ee8680d11b9a5631544c7dac60e779fa4b1480aa7108c3f4xjzs_61036e72-6020-469f-9406-d49df28fbd36/util/0.log" Oct 14 14:14:37 crc kubenswrapper[4725]: I1014 14:14:37.253304 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-64f84fcdbb-wnx2r_ea9bba90-8416-4b43-a2be-d41f635db481/kube-rbac-proxy/0.log" Oct 14 14:14:37 crc kubenswrapper[4725]: I1014 14:14:37.304207 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-64f84fcdbb-wnx2r_ea9bba90-8416-4b43-a2be-d41f635db481/manager/0.log" Oct 14 14:14:37 crc kubenswrapper[4725]: I1014 14:14:37.350481 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-59cdc64769-hbgnj_f8ae62f1-e80d-4f8c-81c3-0c5c50338046/kube-rbac-proxy/0.log" Oct 14 14:14:37 crc 
kubenswrapper[4725]: I1014 14:14:37.529157 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-59cdc64769-hbgnj_f8ae62f1-e80d-4f8c-81c3-0c5c50338046/manager/0.log" Oct 14 14:14:37 crc kubenswrapper[4725]: I1014 14:14:37.550130 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-687df44cdb-h8bkv_40be9ede-1b5e-4022-9626-8367074f88c1/kube-rbac-proxy/0.log" Oct 14 14:14:37 crc kubenswrapper[4725]: I1014 14:14:37.588194 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-687df44cdb-h8bkv_40be9ede-1b5e-4022-9626-8367074f88c1/manager/0.log" Oct 14 14:14:37 crc kubenswrapper[4725]: I1014 14:14:37.743264 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7bb46cd7d-pjkj4_da9db202-78ec-4df7-9ead-374e287391a2/kube-rbac-proxy/0.log" Oct 14 14:14:37 crc kubenswrapper[4725]: I1014 14:14:37.850292 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7bb46cd7d-pjkj4_da9db202-78ec-4df7-9ead-374e287391a2/manager/0.log" Oct 14 14:14:37 crc kubenswrapper[4725]: I1014 14:14:37.943723 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-6d9967f8dd-w9x95_8dc80d7a-d38e-4baa-85ae-fa856c39b48f/kube-rbac-proxy/0.log" Oct 14 14:14:38 crc kubenswrapper[4725]: I1014 14:14:38.021769 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-6d9967f8dd-w9x95_8dc80d7a-d38e-4baa-85ae-fa856c39b48f/manager/0.log" Oct 14 14:14:38 crc kubenswrapper[4725]: I1014 14:14:38.053708 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6d74794d9b-96vkx_f0b42b9f-7713-48ca-b148-c42b5d2006f3/kube-rbac-proxy/0.log" Oct 14 14:14:38 crc kubenswrapper[4725]: I1014 14:14:38.163539 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6d74794d9b-96vkx_f0b42b9f-7713-48ca-b148-c42b5d2006f3/manager/0.log" Oct 14 14:14:38 crc kubenswrapper[4725]: I1014 14:14:38.255664 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-585fc5b659-jgbh4_3316f5a0-820b-45ec-802a-dc3203f1d9fa/kube-rbac-proxy/0.log" Oct 14 14:14:38 crc kubenswrapper[4725]: I1014 14:14:38.390880 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-74cb5cbc49-tt2xm_64f86a1d-32a0-4133-94c3-b59ab14a0d4e/kube-rbac-proxy/0.log" Oct 14 14:14:38 crc kubenswrapper[4725]: I1014 14:14:38.444536 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-585fc5b659-jgbh4_3316f5a0-820b-45ec-802a-dc3203f1d9fa/manager/0.log" Oct 14 14:14:38 crc kubenswrapper[4725]: I1014 14:14:38.492202 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-74cb5cbc49-tt2xm_64f86a1d-32a0-4133-94c3-b59ab14a0d4e/manager/0.log" Oct 14 14:14:38 crc kubenswrapper[4725]: I1014 14:14:38.602429 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-ddb98f99b-8jwrn_95db1b3c-9877-4e8f-b756-532ccfd5db7a/kube-rbac-proxy/0.log" Oct 
14 14:14:38 crc kubenswrapper[4725]: I1014 14:14:38.688196 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-ddb98f99b-8jwrn_95db1b3c-9877-4e8f-b756-532ccfd5db7a/manager/0.log"
Oct 14 14:14:38 crc kubenswrapper[4725]: I1014 14:14:38.808292 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-59578bc799-4jwx9_0ba97bf9-2b5b-495b-99e2-328a987535e1/kube-rbac-proxy/0.log"
Oct 14 14:14:38 crc kubenswrapper[4725]: I1014 14:14:38.824416 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-59578bc799-4jwx9_0ba97bf9-2b5b-495b-99e2-328a987535e1/manager/0.log"
Oct 14 14:14:38 crc kubenswrapper[4725]: I1014 14:14:38.886827 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-5777b4f897-9q25r_960fbcd9-1563-4f84-89ac-53694a2413de/kube-rbac-proxy/0.log"
Oct 14 14:14:38 crc kubenswrapper[4725]: I1014 14:14:38.999249 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-5777b4f897-9q25r_960fbcd9-1563-4f84-89ac-53694a2413de/manager/0.log"
Oct 14 14:14:39 crc kubenswrapper[4725]: I1014 14:14:39.072508 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-797d478b46-55m5j_15e9c123-7b58-49ab-b7d7-f429e6b15c1e/kube-rbac-proxy/0.log"
Oct 14 14:14:39 crc kubenswrapper[4725]: I1014 14:14:39.139984 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-797d478b46-55m5j_15e9c123-7b58-49ab-b7d7-f429e6b15c1e/manager/0.log"
Oct 14 14:14:39 crc kubenswrapper[4725]: I1014 14:14:39.243809 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-57bb74c7bf-mvqp4_720eb902-cb6c-4d90-b25b-eae97e6d055d/kube-rbac-proxy/0.log"
Oct 14 14:14:39 crc kubenswrapper[4725]: I1014 14:14:39.349341 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-57bb74c7bf-mvqp4_720eb902-cb6c-4d90-b25b-eae97e6d055d/manager/0.log"
Oct 14 14:14:39 crc kubenswrapper[4725]: I1014 14:14:39.489858 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6d7c7ddf95-d258r_e4436f5f-2c38-49ee-8e53-061e294f009f/manager/0.log"
Oct 14 14:14:39 crc kubenswrapper[4725]: I1014 14:14:39.492483 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6d7c7ddf95-d258r_e4436f5f-2c38-49ee-8e53-061e294f009f/kube-rbac-proxy/0.log"
Oct 14 14:14:39 crc kubenswrapper[4725]: I1014 14:14:39.553398 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6cc7fb757d5s454_1ee65fef-639a-4cff-8e7e-b68f7dd3a4ec/kube-rbac-proxy/0.log"
Oct 14 14:14:39 crc kubenswrapper[4725]: I1014 14:14:39.663292 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6cc7fb757d5s454_1ee65fef-639a-4cff-8e7e-b68f7dd3a4ec/manager/0.log"
Oct 14 14:14:39 crc kubenswrapper[4725]: I1014 14:14:39.703570 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7cff5c958-wscp6_2627f8df-e54f-45e7-862a-fcacad250f2a/kube-rbac-proxy/0.log"
Oct 14 14:14:39 crc kubenswrapper[4725]: I1014 14:14:39.937077 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-577669444d-64dvz_859cc4b3-36d4-43aa-8780-12ffdfef67e1/kube-rbac-proxy/0.log"
Oct 14 14:14:40 crc kubenswrapper[4725]: I1014 14:14:40.320813 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-577669444d-64dvz_859cc4b3-36d4-43aa-8780-12ffdfef67e1/operator/0.log"
Oct 14 14:14:40 crc kubenswrapper[4725]: I1014 14:14:40.349255 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-v8nhf_4473a60c-84a6-440a-ad02-ed3331345270/registry-server/0.log"
Oct 14 14:14:40 crc kubenswrapper[4725]: I1014 14:14:40.535948 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-869cc7797f-fnf6m_28bb98a9-5146-48a5-8f4f-a3a7766ab18c/kube-rbac-proxy/0.log"
Oct 14 14:14:40 crc kubenswrapper[4725]: I1014 14:14:40.588624 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-869cc7797f-fnf6m_28bb98a9-5146-48a5-8f4f-a3a7766ab18c/manager/0.log"
Oct 14 14:14:40 crc kubenswrapper[4725]: I1014 14:14:40.597791 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-664664cb68-b462h_420f32f0-6ea7-4489-8165-215def02acf1/kube-rbac-proxy/0.log"
Oct 14 14:14:40 crc kubenswrapper[4725]: I1014 14:14:40.745367 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7cff5c958-wscp6_2627f8df-e54f-45e7-862a-fcacad250f2a/manager/0.log"
Oct 14 14:14:40 crc kubenswrapper[4725]: I1014 14:14:40.807924 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-664664cb68-b462h_420f32f0-6ea7-4489-8165-215def02acf1/manager/0.log"
Oct 14 14:14:40 crc kubenswrapper[4725]: I1014 14:14:40.849901 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-tsxsk_3a364ec5-2c66-4d8d-8f51-b3b4357e7b67/operator/0.log"
Oct 14 14:14:40 crc kubenswrapper[4725]: I1014 14:14:40.929569 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f4d5dfdc6-vmdhg_3d451d86-1531-4612-a2c8-5be10e09f890/kube-rbac-proxy/0.log"
Oct 14 14:14:41 crc kubenswrapper[4725]: I1014 14:14:41.004415 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f4d5dfdc6-vmdhg_3d451d86-1531-4612-a2c8-5be10e09f890/manager/0.log"
Oct 14 14:14:41 crc kubenswrapper[4725]: I1014 14:14:41.022353 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-578874c84d-878xx_fb53b295-5121-4b18-9d3d-e3e981a64f2d/kube-rbac-proxy/0.log"
Oct 14 14:14:41 crc kubenswrapper[4725]: I1014 14:14:41.086621 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-578874c84d-878xx_fb53b295-5121-4b18-9d3d-e3e981a64f2d/manager/0.log"
Oct 14 14:14:41 crc kubenswrapper[4725]: I1014 14:14:41.209991 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-ffcdd6c94-nhdz7_cb31097e-d603-4f4d-8cc1-a5f1a841ea00/kube-rbac-proxy/0.log"
Oct 14 14:14:41 crc kubenswrapper[4725]: I1014 14:14:41.237068 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-ffcdd6c94-nhdz7_cb31097e-d603-4f4d-8cc1-a5f1a841ea00/manager/0.log"
Oct 14 14:14:41 crc kubenswrapper[4725]: I1014 14:14:41.296724 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-646675d848-l76vk_50605e76-3c4b-480c-9a76-e84962f38851/kube-rbac-proxy/0.log"
Oct 14 14:14:41 crc kubenswrapper[4725]: I1014 14:14:41.400359 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-646675d848-l76vk_50605e76-3c4b-480c-9a76-e84962f38851/manager/0.log"
Oct 14 14:14:56 crc kubenswrapper[4725]: I1014 14:14:56.374978 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-658kg_10813d7e-3ed3-49a7-a2ad-5aa0db76a25d/control-plane-machine-set-operator/0.log"
Oct 14 14:14:56 crc kubenswrapper[4725]: I1014 14:14:56.538487 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-r985j_1abee1b1-8c1e-43df-89cc-5381a2ef0fc6/kube-rbac-proxy/0.log"
Oct 14 14:14:56 crc kubenswrapper[4725]: I1014 14:14:56.591052 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-r985j_1abee1b1-8c1e-43df-89cc-5381a2ef0fc6/machine-api-operator/0.log"
Oct 14 14:15:00 crc kubenswrapper[4725]: I1014 14:15:00.199068 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340855-9v64q"]
Oct 14 14:15:00 crc kubenswrapper[4725]: E1014 14:15:00.200168 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fb94153-964f-4de4-acbb-a1dd0d04ae2e" containerName="container-00"
Oct 14 14:15:00 crc kubenswrapper[4725]: I1014 14:15:00.200185 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fb94153-964f-4de4-acbb-a1dd0d04ae2e" containerName="container-00"
Oct 14 14:15:00 crc kubenswrapper[4725]: I1014 14:15:00.200434 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fb94153-964f-4de4-acbb-a1dd0d04ae2e" containerName="container-00"
Oct 14 14:15:00 crc kubenswrapper[4725]: I1014 14:15:00.201286 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340855-9v64q"
Oct 14 14:15:00 crc kubenswrapper[4725]: I1014 14:15:00.203896 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Oct 14 14:15:00 crc kubenswrapper[4725]: I1014 14:15:00.203903 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Oct 14 14:15:00 crc kubenswrapper[4725]: I1014 14:15:00.209137 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340855-9v64q"]
Oct 14 14:15:00 crc kubenswrapper[4725]: I1014 14:15:00.322750 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6e7eb405-f539-4eb1-b812-5ce3805a9341-config-volume\") pod \"collect-profiles-29340855-9v64q\" (UID: \"6e7eb405-f539-4eb1-b812-5ce3805a9341\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340855-9v64q"
Oct 14 14:15:00 crc kubenswrapper[4725]: I1014 14:15:00.322793 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6e7eb405-f539-4eb1-b812-5ce3805a9341-secret-volume\") pod \"collect-profiles-29340855-9v64q\" (UID: \"6e7eb405-f539-4eb1-b812-5ce3805a9341\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340855-9v64q"
Oct 14 14:15:00 crc kubenswrapper[4725]: I1014 14:15:00.322884 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktvk2\" (UniqueName: \"kubernetes.io/projected/6e7eb405-f539-4eb1-b812-5ce3805a9341-kube-api-access-ktvk2\") pod \"collect-profiles-29340855-9v64q\" (UID: \"6e7eb405-f539-4eb1-b812-5ce3805a9341\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340855-9v64q"
Oct 14 14:15:00 crc kubenswrapper[4725]: I1014 14:15:00.424340 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6e7eb405-f539-4eb1-b812-5ce3805a9341-config-volume\") pod \"collect-profiles-29340855-9v64q\" (UID: \"6e7eb405-f539-4eb1-b812-5ce3805a9341\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340855-9v64q"
Oct 14 14:15:00 crc kubenswrapper[4725]: I1014 14:15:00.424388 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6e7eb405-f539-4eb1-b812-5ce3805a9341-secret-volume\") pod \"collect-profiles-29340855-9v64q\" (UID: \"6e7eb405-f539-4eb1-b812-5ce3805a9341\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340855-9v64q"
Oct 14 14:15:00 crc kubenswrapper[4725]: I1014 14:15:00.424429 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktvk2\" (UniqueName: \"kubernetes.io/projected/6e7eb405-f539-4eb1-b812-5ce3805a9341-kube-api-access-ktvk2\") pod \"collect-profiles-29340855-9v64q\" (UID: \"6e7eb405-f539-4eb1-b812-5ce3805a9341\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340855-9v64q"
Oct 14 14:15:00 crc kubenswrapper[4725]: I1014 14:15:00.426064 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6e7eb405-f539-4eb1-b812-5ce3805a9341-config-volume\") pod \"collect-profiles-29340855-9v64q\" (UID: \"6e7eb405-f539-4eb1-b812-5ce3805a9341\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340855-9v64q"
Oct 14 14:15:00 crc kubenswrapper[4725]: I1014 14:15:00.438189 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6e7eb405-f539-4eb1-b812-5ce3805a9341-secret-volume\") pod \"collect-profiles-29340855-9v64q\" (UID: \"6e7eb405-f539-4eb1-b812-5ce3805a9341\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340855-9v64q"
Oct 14 14:15:00 crc kubenswrapper[4725]: I1014 14:15:00.445738 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktvk2\" (UniqueName: \"kubernetes.io/projected/6e7eb405-f539-4eb1-b812-5ce3805a9341-kube-api-access-ktvk2\") pod \"collect-profiles-29340855-9v64q\" (UID: \"6e7eb405-f539-4eb1-b812-5ce3805a9341\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29340855-9v64q"
Oct 14 14:15:00 crc kubenswrapper[4725]: I1014 14:15:00.524955 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340855-9v64q"
Oct 14 14:15:01 crc kubenswrapper[4725]: I1014 14:15:01.097658 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340855-9v64q"]
Oct 14 14:15:01 crc kubenswrapper[4725]: I1014 14:15:01.665250 4725 generic.go:334] "Generic (PLEG): container finished" podID="6e7eb405-f539-4eb1-b812-5ce3805a9341" containerID="3e71db43a38d99a9b446758720d1b1b7e45a63842eea1951d09bef76eb783456" exitCode=0
Oct 14 14:15:01 crc kubenswrapper[4725]: I1014 14:15:01.665292 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340855-9v64q" event={"ID":"6e7eb405-f539-4eb1-b812-5ce3805a9341","Type":"ContainerDied","Data":"3e71db43a38d99a9b446758720d1b1b7e45a63842eea1951d09bef76eb783456"}
Oct 14 14:15:01 crc kubenswrapper[4725]: I1014 14:15:01.665317 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340855-9v64q" event={"ID":"6e7eb405-f539-4eb1-b812-5ce3805a9341","Type":"ContainerStarted","Data":"9cda6d903f82461080497c89ca001a2c09ad2ec26651f1e41063bebbbc8da933"}
Oct 14 14:15:02 crc kubenswrapper[4725]: I1014 14:15:02.520484 4725 patch_prober.go:28] interesting pod/machine-config-daemon-t9hh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 14 14:15:02 crc kubenswrapper[4725]: I1014 14:15:02.520967 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 14 14:15:03 crc kubenswrapper[4725]: I1014 14:15:03.039374 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340855-9v64q"
Oct 14 14:15:03 crc kubenswrapper[4725]: I1014 14:15:03.188671 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktvk2\" (UniqueName: \"kubernetes.io/projected/6e7eb405-f539-4eb1-b812-5ce3805a9341-kube-api-access-ktvk2\") pod \"6e7eb405-f539-4eb1-b812-5ce3805a9341\" (UID: \"6e7eb405-f539-4eb1-b812-5ce3805a9341\") "
Oct 14 14:15:03 crc kubenswrapper[4725]: I1014 14:15:03.188820 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6e7eb405-f539-4eb1-b812-5ce3805a9341-config-volume\") pod \"6e7eb405-f539-4eb1-b812-5ce3805a9341\" (UID: \"6e7eb405-f539-4eb1-b812-5ce3805a9341\") "
Oct 14 14:15:03 crc kubenswrapper[4725]: I1014 14:15:03.188850 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6e7eb405-f539-4eb1-b812-5ce3805a9341-secret-volume\") pod \"6e7eb405-f539-4eb1-b812-5ce3805a9341\" (UID: \"6e7eb405-f539-4eb1-b812-5ce3805a9341\") "
Oct 14 14:15:03 crc kubenswrapper[4725]: I1014 14:15:03.190105 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e7eb405-f539-4eb1-b812-5ce3805a9341-config-volume" (OuterVolumeSpecName: "config-volume") pod "6e7eb405-f539-4eb1-b812-5ce3805a9341" (UID: "6e7eb405-f539-4eb1-b812-5ce3805a9341"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 14 14:15:03 crc kubenswrapper[4725]: I1014 14:15:03.195299 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e7eb405-f539-4eb1-b812-5ce3805a9341-kube-api-access-ktvk2" (OuterVolumeSpecName: "kube-api-access-ktvk2") pod "6e7eb405-f539-4eb1-b812-5ce3805a9341" (UID: "6e7eb405-f539-4eb1-b812-5ce3805a9341"). InnerVolumeSpecName "kube-api-access-ktvk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 14:15:03 crc kubenswrapper[4725]: I1014 14:15:03.195690 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e7eb405-f539-4eb1-b812-5ce3805a9341-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6e7eb405-f539-4eb1-b812-5ce3805a9341" (UID: "6e7eb405-f539-4eb1-b812-5ce3805a9341"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 14 14:15:03 crc kubenswrapper[4725]: I1014 14:15:03.291384 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktvk2\" (UniqueName: \"kubernetes.io/projected/6e7eb405-f539-4eb1-b812-5ce3805a9341-kube-api-access-ktvk2\") on node \"crc\" DevicePath \"\""
Oct 14 14:15:03 crc kubenswrapper[4725]: I1014 14:15:03.291434 4725 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6e7eb405-f539-4eb1-b812-5ce3805a9341-config-volume\") on node \"crc\" DevicePath \"\""
Oct 14 14:15:03 crc kubenswrapper[4725]: I1014 14:15:03.291463 4725 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6e7eb405-f539-4eb1-b812-5ce3805a9341-secret-volume\") on node \"crc\" DevicePath \"\""
Oct 14 14:15:03 crc kubenswrapper[4725]: I1014 14:15:03.681761 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29340855-9v64q" event={"ID":"6e7eb405-f539-4eb1-b812-5ce3805a9341","Type":"ContainerDied","Data":"9cda6d903f82461080497c89ca001a2c09ad2ec26651f1e41063bebbbc8da933"}
Oct 14 14:15:03 crc kubenswrapper[4725]: I1014 14:15:03.682200 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cda6d903f82461080497c89ca001a2c09ad2ec26651f1e41063bebbbc8da933"
Oct 14 14:15:03 crc kubenswrapper[4725]: I1014 14:15:03.681888 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29340855-9v64q"
Oct 14 14:15:04 crc kubenswrapper[4725]: I1014 14:15:04.152080 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340810-gkvzs"]
Oct 14 14:15:04 crc kubenswrapper[4725]: I1014 14:15:04.171586 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29340810-gkvzs"]
Oct 14 14:15:05 crc kubenswrapper[4725]: I1014 14:15:05.938916 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a18d61b0-f276-4eac-b1c6-bbbc679d5059" path="/var/lib/kubelet/pods/a18d61b0-f276-4eac-b1c6-bbbc679d5059/volumes"
Oct 14 14:15:09 crc kubenswrapper[4725]: I1014 14:15:09.224304 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-kpptr_39534dc6-c413-407f-a4d4-1d129d0dcdf3/cert-manager-controller/0.log"
Oct 14 14:15:09 crc kubenswrapper[4725]: I1014 14:15:09.345668 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-s9g2p_f6145f04-9a33-4d9b-9158-7f6fd9bf38d3/cert-manager-cainjector/0.log"
Oct 14 14:15:09 crc kubenswrapper[4725]: I1014 14:15:09.418040 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-jjlbv_65c2f894-94a7-4e13-b4f9-16bc3a11921b/cert-manager-webhook/0.log"
Oct 14 14:15:21 crc kubenswrapper[4725]: I1014 14:15:21.064993 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6b874cbd85-nwf94_5ed299ff-7f75-4376-a446-2f24b1d1e539/nmstate-console-plugin/0.log"
Oct 14 14:15:21 crc kubenswrapper[4725]: I1014 14:15:21.240467 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-5j272_3f7ba899-2a43-4866-b3d7-34b6ca02b7e4/nmstate-handler/0.log"
Oct 14 14:15:21 crc kubenswrapper[4725]: I1014 14:15:21.247246 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-zxznn_42b236b2-dcbb-4c0c-8916-1eba5e90f301/kube-rbac-proxy/0.log"
Oct 14 14:15:21 crc kubenswrapper[4725]: I1014 14:15:21.320948 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-zxznn_42b236b2-dcbb-4c0c-8916-1eba5e90f301/nmstate-metrics/0.log"
Oct 14 14:15:21 crc kubenswrapper[4725]: I1014 14:15:21.436359 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-858ddd8f98-kgnhm_f4b6e00a-85f8-4036-abc1-c53043f84612/nmstate-operator/0.log"
Oct 14 14:15:21 crc kubenswrapper[4725]: I1014 14:15:21.506776 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6cdbc54649-bvg5g_9a636636-e68e-4e0f-ac55-b64f6e886b0e/nmstate-webhook/0.log"
Oct 14 14:15:32 crc kubenswrapper[4725]: I1014 14:15:32.520489 4725 patch_prober.go:28] interesting pod/machine-config-daemon-t9hh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 14 14:15:32 crc kubenswrapper[4725]: I1014 14:15:32.521055 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 14 14:15:32 crc kubenswrapper[4725]: I1014 14:15:32.521101 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9"
Oct 14 14:15:32 crc kubenswrapper[4725]: I1014 14:15:32.521846 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1e2cf79492a165ced9cc9a27ff00cb3288ac5a0abfa5f696ffa2da70ed76d129"} pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 14 14:15:32 crc kubenswrapper[4725]: I1014 14:15:32.521907 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" containerID="cri-o://1e2cf79492a165ced9cc9a27ff00cb3288ac5a0abfa5f696ffa2da70ed76d129" gracePeriod=600
Oct 14 14:15:32 crc kubenswrapper[4725]: E1014 14:15:32.649652 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c"
Oct 14 14:15:32 crc kubenswrapper[4725]: I1014 14:15:32.979157 4725 generic.go:334] "Generic (PLEG): container finished" podID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerID="1e2cf79492a165ced9cc9a27ff00cb3288ac5a0abfa5f696ffa2da70ed76d129" exitCode=0
Oct 14 14:15:32 crc kubenswrapper[4725]: I1014 14:15:32.979207 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" event={"ID":"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c","Type":"ContainerDied","Data":"1e2cf79492a165ced9cc9a27ff00cb3288ac5a0abfa5f696ffa2da70ed76d129"}
Oct 14 14:15:32 crc kubenswrapper[4725]: I1014 14:15:32.979260 4725 scope.go:117] "RemoveContainer" containerID="6fee48a36d192c8eadf8b48d4f629647a2d4809a7741f2119d1d453cb7ebd3e9"
Oct 14 14:15:32 crc kubenswrapper[4725]: I1014 14:15:32.980172 4725 scope.go:117] "RemoveContainer" containerID="1e2cf79492a165ced9cc9a27ff00cb3288ac5a0abfa5f696ffa2da70ed76d129"
Oct 14 14:15:32 crc kubenswrapper[4725]: E1014 14:15:32.980877 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c"
Oct 14 14:15:34 crc kubenswrapper[4725]: I1014 14:15:34.962158 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-cs2cd_bdee3f99-134e-4020-b9a6-fdc4c66081eb/kube-rbac-proxy/0.log"
Oct 14 14:15:35 crc kubenswrapper[4725]: I1014 14:15:35.075255 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-cs2cd_bdee3f99-134e-4020-b9a6-fdc4c66081eb/controller/0.log"
Oct 14 14:15:35 crc kubenswrapper[4725]: I1014 14:15:35.160134 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pmr8r_0aaf5c0e-3673-4bfa-a046-feed6a0121d7/cp-frr-files/0.log"
Oct 14 14:15:35 crc kubenswrapper[4725]: I1014 14:15:35.311719 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pmr8r_0aaf5c0e-3673-4bfa-a046-feed6a0121d7/cp-frr-files/0.log"
Oct 14 14:15:35 crc kubenswrapper[4725]: I1014 14:15:35.345021 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pmr8r_0aaf5c0e-3673-4bfa-a046-feed6a0121d7/cp-reloader/0.log"
Oct 14 14:15:35 crc kubenswrapper[4725]: I1014 14:15:35.345214 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pmr8r_0aaf5c0e-3673-4bfa-a046-feed6a0121d7/cp-metrics/0.log"
Oct 14 14:15:35 crc kubenswrapper[4725]: I1014 14:15:35.345239 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pmr8r_0aaf5c0e-3673-4bfa-a046-feed6a0121d7/cp-reloader/0.log"
Oct 14 14:15:35 crc kubenswrapper[4725]: I1014 14:15:35.504601 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pmr8r_0aaf5c0e-3673-4bfa-a046-feed6a0121d7/cp-reloader/0.log"
Oct 14 14:15:35 crc kubenswrapper[4725]: I1014 14:15:35.509083 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pmr8r_0aaf5c0e-3673-4bfa-a046-feed6a0121d7/cp-frr-files/0.log"
Oct 14 14:15:35 crc kubenswrapper[4725]: I1014 14:15:35.518471 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pmr8r_0aaf5c0e-3673-4bfa-a046-feed6a0121d7/cp-metrics/0.log"
Oct 14 14:15:35 crc kubenswrapper[4725]: I1014 14:15:35.564116 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pmr8r_0aaf5c0e-3673-4bfa-a046-feed6a0121d7/cp-metrics/0.log"
Oct 14 14:15:35 crc kubenswrapper[4725]: I1014 14:15:35.711314 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pmr8r_0aaf5c0e-3673-4bfa-a046-feed6a0121d7/cp-metrics/0.log"
Oct 14 14:15:35 crc kubenswrapper[4725]: I1014 14:15:35.717927 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pmr8r_0aaf5c0e-3673-4bfa-a046-feed6a0121d7/cp-reloader/0.log"
Oct 14 14:15:35 crc kubenswrapper[4725]: I1014 14:15:35.755793 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pmr8r_0aaf5c0e-3673-4bfa-a046-feed6a0121d7/cp-frr-files/0.log"
Oct 14 14:15:35 crc kubenswrapper[4725]: I1014 14:15:35.760243 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pmr8r_0aaf5c0e-3673-4bfa-a046-feed6a0121d7/controller/0.log"
Oct 14 14:15:35 crc kubenswrapper[4725]: I1014 14:15:35.937279 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pmr8r_0aaf5c0e-3673-4bfa-a046-feed6a0121d7/frr-metrics/0.log"
Oct 14 14:15:35 crc kubenswrapper[4725]: I1014 14:15:35.988529 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pmr8r_0aaf5c0e-3673-4bfa-a046-feed6a0121d7/kube-rbac-proxy-frr/0.log"
Oct 14 14:15:36 crc kubenswrapper[4725]: I1014 14:15:36.003335 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pmr8r_0aaf5c0e-3673-4bfa-a046-feed6a0121d7/kube-rbac-proxy/0.log"
Oct 14 14:15:36 crc kubenswrapper[4725]: I1014 14:15:36.190205 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pmr8r_0aaf5c0e-3673-4bfa-a046-feed6a0121d7/reloader/0.log"
Oct 14 14:15:36 crc kubenswrapper[4725]: I1014 14:15:36.236150 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-64bf5d555-7xnfb_9c551c0f-3df3-4ba6-8bbb-4d996cad9d45/frr-k8s-webhook-server/0.log"
Oct 14 14:15:36 crc kubenswrapper[4725]: I1014 14:15:36.450395 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5c787f6f6d-k5ls2_9f68d3de-1952-4351-9955-742c297861c5/manager/0.log"
Oct 14 14:15:36 crc kubenswrapper[4725]: I1014 14:15:36.728748 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2dxqc_8f68d749-82ff-45ee-b658-2324015012f7/kube-rbac-proxy/0.log"
Oct 14 14:15:36 crc kubenswrapper[4725]: I1014 14:15:36.744097 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-95c4f899b-qzjhp_5c72d8f5-e412-465e-9f73-597f96b57392/webhook-server/0.log"
Oct 14 14:15:37 crc kubenswrapper[4725]: I1014 14:15:37.246674 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pmr8r_0aaf5c0e-3673-4bfa-a046-feed6a0121d7/frr/0.log"
Oct 14 14:15:37 crc kubenswrapper[4725]: I1014 14:15:37.308331 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2dxqc_8f68d749-82ff-45ee-b658-2324015012f7/speaker/0.log"
Oct 14 14:15:47 crc kubenswrapper[4725]: I1014 14:15:47.921871 4725 scope.go:117] "RemoveContainer" containerID="1e2cf79492a165ced9cc9a27ff00cb3288ac5a0abfa5f696ffa2da70ed76d129"
Oct 14 14:15:47 crc kubenswrapper[4725]: E1014 14:15:47.922564 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c"
Oct 14 14:15:49 crc kubenswrapper[4725]: I1014 14:15:49.681912 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bnmkg_f11c88bf-9dff-4a1e-825d-bcaae865c70d/util/0.log"
Oct 14 14:15:49 crc kubenswrapper[4725]: I1014 14:15:49.851441 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bnmkg_f11c88bf-9dff-4a1e-825d-bcaae865c70d/util/0.log"
Oct 14 14:15:49 crc kubenswrapper[4725]: I1014 14:15:49.863111 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bnmkg_f11c88bf-9dff-4a1e-825d-bcaae865c70d/pull/0.log"
Oct 14 14:15:49 crc kubenswrapper[4725]: I1014 14:15:49.874429 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bnmkg_f11c88bf-9dff-4a1e-825d-bcaae865c70d/pull/0.log"
Oct 14 14:15:50 crc kubenswrapper[4725]: I1014 14:15:50.070641 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bnmkg_f11c88bf-9dff-4a1e-825d-bcaae865c70d/util/0.log"
Oct 14 14:15:50 crc kubenswrapper[4725]: I1014 14:15:50.107702 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bnmkg_f11c88bf-9dff-4a1e-825d-bcaae865c70d/extract/0.log"
Oct 14 14:15:50 crc kubenswrapper[4725]: I1014 14:15:50.142637 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bnmkg_f11c88bf-9dff-4a1e-825d-bcaae865c70d/pull/0.log"
Oct 14 14:15:50 crc kubenswrapper[4725]: I1014 14:15:50.253228 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rjr78_08e081de-8c18-4fc2-8b9b-844352989e96/extract-utilities/0.log"
Oct 14 14:15:50 crc kubenswrapper[4725]: I1014 14:15:50.414970 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rjr78_08e081de-8c18-4fc2-8b9b-844352989e96/extract-content/0.log"
Oct 14 14:15:50 crc kubenswrapper[4725]: I1014 14:15:50.462199 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rjr78_08e081de-8c18-4fc2-8b9b-844352989e96/extract-content/0.log"
Oct 14 14:15:50 crc kubenswrapper[4725]: I1014 14:15:50.480189 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rjr78_08e081de-8c18-4fc2-8b9b-844352989e96/extract-utilities/0.log"
Oct 14 14:15:50 crc kubenswrapper[4725]: I1014 14:15:50.589774 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rjr78_08e081de-8c18-4fc2-8b9b-844352989e96/extract-content/0.log"
Oct 14 14:15:50 crc kubenswrapper[4725]: I1014 14:15:50.596707 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rjr78_08e081de-8c18-4fc2-8b9b-844352989e96/extract-utilities/0.log"
Oct 14 14:15:50 crc kubenswrapper[4725]: I1014 14:15:50.783926 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-chnk6_48c6e1e3-4851-4faf-bcb0-d12f5c73f6a9/extract-utilities/0.log"
Oct 14 14:15:51 crc kubenswrapper[4725]: I1014 14:15:51.014591 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-chnk6_48c6e1e3-4851-4faf-bcb0-d12f5c73f6a9/extract-content/0.log"
Oct 14 14:15:51 crc kubenswrapper[4725]: I1014 14:15:51.041848 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-chnk6_48c6e1e3-4851-4faf-bcb0-d12f5c73f6a9/extract-utilities/0.log"
Oct 14 14:15:51 crc kubenswrapper[4725]: I1014 14:15:51.093759 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rjr78_08e081de-8c18-4fc2-8b9b-844352989e96/registry-server/0.log"
Oct 14 14:15:51 crc kubenswrapper[4725]: I1014 14:15:51.102774 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-chnk6_48c6e1e3-4851-4faf-bcb0-d12f5c73f6a9/extract-content/0.log"
Oct 14 14:15:51 crc kubenswrapper[4725]: I1014 14:15:51.238893 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-chnk6_48c6e1e3-4851-4faf-bcb0-d12f5c73f6a9/extract-utilities/0.log"
Oct 14 14:15:51 crc kubenswrapper[4725]: I1014 14:15:51.249432 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-chnk6_48c6e1e3-4851-4faf-bcb0-d12f5c73f6a9/extract-content/0.log"
Oct 14 14:15:51 crc kubenswrapper[4725]: I1014 14:15:51.406276 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb_d0546f96-2a09-4b8f-9f0d-33615b2a71b8/util/0.log"
Oct 14 14:15:51 crc kubenswrapper[4725]: I1014 14:15:51.659421 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb_d0546f96-2a09-4b8f-9f0d-33615b2a71b8/util/0.log"
Oct 14 14:15:51 crc kubenswrapper[4725]: I1014 14:15:51.661594 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb_d0546f96-2a09-4b8f-9f0d-33615b2a71b8/pull/0.log"
Oct 14 14:15:51 crc kubenswrapper[4725]: I1014 14:15:51.702437 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb_d0546f96-2a09-4b8f-9f0d-33615b2a71b8/pull/0.log"
Oct 14 14:15:51 crc kubenswrapper[4725]: I1014 14:15:51.878810 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-chnk6_48c6e1e3-4851-4faf-bcb0-d12f5c73f6a9/registry-server/0.log"
Oct 14 14:15:51 crc kubenswrapper[4725]: I1014 14:15:51.926259 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb_d0546f96-2a09-4b8f-9f0d-33615b2a71b8/extract/0.log"
Oct 14 14:15:51 crc kubenswrapper[4725]: I1014 14:15:51.935001 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb_d0546f96-2a09-4b8f-9f0d-33615b2a71b8/util/0.log"
Oct 14 14:15:51 crc kubenswrapper[4725]: I1014 14:15:51.962527 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb_d0546f96-2a09-4b8f-9f0d-33615b2a71b8/pull/0.log"
Oct 14 14:15:52 crc kubenswrapper[4725]: I1014 14:15:52.121014 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-q66z9_3b46f078-a8dc-4eaa-a657-4f6c85c19c06/marketplace-operator/0.log"
Oct 14 14:15:52 crc kubenswrapper[4725]: I1014 14:15:52.202168 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xmq8l_d8154e8c-6473-4716-ba8a-b4141852b960/extract-utilities/0.log"
Oct 14 14:15:52 crc kubenswrapper[4725]: I1014 14:15:52.363941 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xmq8l_d8154e8c-6473-4716-ba8a-b4141852b960/extract-content/0.log"
Oct 14 14:15:52 crc kubenswrapper[4725]: I1014 14:15:52.373212 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xmq8l_d8154e8c-6473-4716-ba8a-b4141852b960/extract-content/0.log"
Oct 14 14:15:52 crc kubenswrapper[4725]: I1014 14:15:52.416924 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xmq8l_d8154e8c-6473-4716-ba8a-b4141852b960/extract-utilities/0.log"
Oct 14 14:15:52 crc kubenswrapper[4725]: I1014 14:15:52.532875 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xmq8l_d8154e8c-6473-4716-ba8a-b4141852b960/extract-utilities/0.log"
Oct 14 14:15:52 crc kubenswrapper[4725]: I1014 14:15:52.549526 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xmq8l_d8154e8c-6473-4716-ba8a-b4141852b960/extract-content/0.log"
Oct 14 14:15:52 crc kubenswrapper[4725]: I1014 14:15:52.745078 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xmq8l_d8154e8c-6473-4716-ba8a-b4141852b960/registry-server/0.log"
Oct 14 14:15:52 crc kubenswrapper[4725]: I1014 14:15:52.750612 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g5hgh_58b9a349-dfe9-4cc9-851a-80c8bfc2f898/extract-utilities/0.log"
Oct 14 14:15:52 crc kubenswrapper[4725]: I1014 14:15:52.859376 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g5hgh_58b9a349-dfe9-4cc9-851a-80c8bfc2f898/extract-content/0.log"
Oct 14 14:15:52 crc kubenswrapper[4725]: I1014 14:15:52.885241 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g5hgh_58b9a349-dfe9-4cc9-851a-80c8bfc2f898/extract-utilities/0.log"
Oct 14 14:15:52 crc kubenswrapper[4725]: I1014 14:15:52.936785 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g5hgh_58b9a349-dfe9-4cc9-851a-80c8bfc2f898/extract-content/0.log"
Oct 14 14:15:53 crc kubenswrapper[4725]: I1014 14:15:53.109037 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g5hgh_58b9a349-dfe9-4cc9-851a-80c8bfc2f898/extract-content/0.log"
Oct 14 14:15:53 crc kubenswrapper[4725]: I1014 14:15:53.114756 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g5hgh_58b9a349-dfe9-4cc9-851a-80c8bfc2f898/extract-utilities/0.log"
Oct 14 14:15:53 crc kubenswrapper[4725]: I1014 14:15:53.647777 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g5hgh_58b9a349-dfe9-4cc9-851a-80c8bfc2f898/registry-server/0.log"
Oct 14 14:15:59 crc kubenswrapper[4725]: I1014 14:15:59.468030 4725 scope.go:117] "RemoveContainer" containerID="a0f440f27b514c29f4343e16bd799976f5fd43f6bc903fa5e31645f37f6b9972"
Oct 14 14:16:02 crc kubenswrapper[4725]: I1014 14:16:02.921382 4725 scope.go:117] "RemoveContainer" containerID="1e2cf79492a165ced9cc9a27ff00cb3288ac5a0abfa5f696ffa2da70ed76d129"
Oct 14 14:16:02 crc kubenswrapper[4725]: E1014 14:16:02.922105 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c"
Oct 14 14:16:16 crc kubenswrapper[4725]: I1014 14:16:16.923374 4725 scope.go:117] "RemoveContainer" containerID="1e2cf79492a165ced9cc9a27ff00cb3288ac5a0abfa5f696ffa2da70ed76d129"
Oct 14 14:16:16 crc kubenswrapper[4725]: E1014 14:16:16.924365 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c"
Oct 14 14:16:27 crc kubenswrapper[4725]: I1014 14:16:27.921289 4725 scope.go:117] "RemoveContainer" containerID="1e2cf79492a165ced9cc9a27ff00cb3288ac5a0abfa5f696ffa2da70ed76d129"
Oct 14 14:16:27 crc kubenswrapper[4725]: E1014 14:16:27.922312 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c"
Oct 14 14:16:39 crc kubenswrapper[4725]: I1014 14:16:39.766897 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xtnjr"]
Oct 14 14:16:39 crc kubenswrapper[4725]: E1014 14:16:39.767997 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e7eb405-f539-4eb1-b812-5ce3805a9341" containerName="collect-profiles"
Oct 14 14:16:39 crc kubenswrapper[4725]: I1014 14:16:39.768016 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e7eb405-f539-4eb1-b812-5ce3805a9341" containerName="collect-profiles"
Oct 14 14:16:39 crc kubenswrapper[4725]: I1014 14:16:39.769151 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e7eb405-f539-4eb1-b812-5ce3805a9341" containerName="collect-profiles"
Oct 14 14:16:39 crc kubenswrapper[4725]: I1014 14:16:39.771038 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xtnjr"
Oct 14 14:16:39 crc kubenswrapper[4725]: I1014 14:16:39.781115 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xtnjr"]
Oct 14 14:16:39 crc kubenswrapper[4725]: I1014 14:16:39.929134 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdwvw\" (UniqueName: \"kubernetes.io/projected/74a4189f-fe03-4e10-ae35-d14bc70c2c5b-kube-api-access-cdwvw\") pod \"redhat-operators-xtnjr\" (UID: \"74a4189f-fe03-4e10-ae35-d14bc70c2c5b\") " pod="openshift-marketplace/redhat-operators-xtnjr"
Oct 14 14:16:39 crc kubenswrapper[4725]: I1014 14:16:39.929381 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74a4189f-fe03-4e10-ae35-d14bc70c2c5b-catalog-content\") pod \"redhat-operators-xtnjr\" (UID: \"74a4189f-fe03-4e10-ae35-d14bc70c2c5b\") " pod="openshift-marketplace/redhat-operators-xtnjr"
Oct 14 14:16:39 crc kubenswrapper[4725]: I1014 14:16:39.929464 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74a4189f-fe03-4e10-ae35-d14bc70c2c5b-utilities\") pod \"redhat-operators-xtnjr\" (UID: \"74a4189f-fe03-4e10-ae35-d14bc70c2c5b\") " pod="openshift-marketplace/redhat-operators-xtnjr"
Oct 14 14:16:40 crc kubenswrapper[4725]: I1014 14:16:40.031725 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74a4189f-fe03-4e10-ae35-d14bc70c2c5b-utilities\") pod \"redhat-operators-xtnjr\" (UID: \"74a4189f-fe03-4e10-ae35-d14bc70c2c5b\") " pod="openshift-marketplace/redhat-operators-xtnjr"
Oct 14 14:16:40 crc kubenswrapper[4725]: I1014 14:16:40.031890 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdwvw\" (UniqueName: \"kubernetes.io/projected/74a4189f-fe03-4e10-ae35-d14bc70c2c5b-kube-api-access-cdwvw\") pod \"redhat-operators-xtnjr\" (UID: \"74a4189f-fe03-4e10-ae35-d14bc70c2c5b\") " pod="openshift-marketplace/redhat-operators-xtnjr"
Oct 14 14:16:40 crc kubenswrapper[4725]: I1014 14:16:40.031925 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74a4189f-fe03-4e10-ae35-d14bc70c2c5b-catalog-content\") pod \"redhat-operators-xtnjr\" (UID: \"74a4189f-fe03-4e10-ae35-d14bc70c2c5b\") " pod="openshift-marketplace/redhat-operators-xtnjr"
Oct 14 14:16:40 crc kubenswrapper[4725]: I1014 14:16:40.032231 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74a4189f-fe03-4e10-ae35-d14bc70c2c5b-utilities\") pod \"redhat-operators-xtnjr\" (UID: \"74a4189f-fe03-4e10-ae35-d14bc70c2c5b\") " pod="openshift-marketplace/redhat-operators-xtnjr"
Oct 14 14:16:40 crc kubenswrapper[4725]: I1014 14:16:40.032345 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74a4189f-fe03-4e10-ae35-d14bc70c2c5b-catalog-content\") pod \"redhat-operators-xtnjr\" (UID: \"74a4189f-fe03-4e10-ae35-d14bc70c2c5b\") " pod="openshift-marketplace/redhat-operators-xtnjr"
Oct 14 14:16:40 crc kubenswrapper[4725]: I1014 14:16:40.052020 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdwvw\" (UniqueName: \"kubernetes.io/projected/74a4189f-fe03-4e10-ae35-d14bc70c2c5b-kube-api-access-cdwvw\") pod \"redhat-operators-xtnjr\" (UID: \"74a4189f-fe03-4e10-ae35-d14bc70c2c5b\") " pod="openshift-marketplace/redhat-operators-xtnjr"
Oct 14 14:16:40 crc kubenswrapper[4725]: I1014 14:16:40.156795 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xtnjr"
Oct 14 14:16:40 crc kubenswrapper[4725]: I1014 14:16:40.708144 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xtnjr"]
Oct 14 14:16:40 crc kubenswrapper[4725]: I1014 14:16:40.921174 4725 scope.go:117] "RemoveContainer" containerID="1e2cf79492a165ced9cc9a27ff00cb3288ac5a0abfa5f696ffa2da70ed76d129"
Oct 14 14:16:40 crc kubenswrapper[4725]: E1014 14:16:40.921767 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c"
Oct 14 14:16:41 crc kubenswrapper[4725]: I1014 14:16:41.690571 4725 generic.go:334] "Generic (PLEG): container finished" podID="74a4189f-fe03-4e10-ae35-d14bc70c2c5b" containerID="f66a908bf42c26171aaa12e4ac9df94a6c368113c2a0ef8dfbc43fe4cd19ed6a" exitCode=0
Oct 14 14:16:41 crc kubenswrapper[4725]: I1014 14:16:41.690964 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtnjr" event={"ID":"74a4189f-fe03-4e10-ae35-d14bc70c2c5b","Type":"ContainerDied","Data":"f66a908bf42c26171aaa12e4ac9df94a6c368113c2a0ef8dfbc43fe4cd19ed6a"}
Oct 14 14:16:41 crc kubenswrapper[4725]: I1014 14:16:41.691124 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtnjr" event={"ID":"74a4189f-fe03-4e10-ae35-d14bc70c2c5b","Type":"ContainerStarted","Data":"251fe132870df442e11cc03ba6a7cb1cf0ffe6eca4b248abcf1972a1d9557285"}
Oct 14 14:16:41 crc kubenswrapper[4725]: I1014 14:16:41.694836 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 14 14:16:43 crc kubenswrapper[4725]: I1014 14:16:43.733624 4725 generic.go:334] "Generic (PLEG): container finished" podID="74a4189f-fe03-4e10-ae35-d14bc70c2c5b" containerID="0cb1693dbb3bb335e73ee0b153add874b0252846b2bf433480f094a7408f9db5" exitCode=0
Oct 14 14:16:43 crc kubenswrapper[4725]: I1014 14:16:43.734019 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtnjr" event={"ID":"74a4189f-fe03-4e10-ae35-d14bc70c2c5b","Type":"ContainerDied","Data":"0cb1693dbb3bb335e73ee0b153add874b0252846b2bf433480f094a7408f9db5"}
Oct 14 14:16:44 crc kubenswrapper[4725]: I1014 14:16:44.746053 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtnjr" event={"ID":"74a4189f-fe03-4e10-ae35-d14bc70c2c5b","Type":"ContainerStarted","Data":"e574cbd82023b75e22432ce7bc77817b59d09df10c846ee23824c6969bba16d9"}
Oct 14 14:16:44 crc kubenswrapper[4725]: I1014 14:16:44.774125 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xtnjr" podStartSLOduration=3.239779547 podStartE2EDuration="5.774107758s" podCreationTimestamp="2025-10-14 14:16:39 +0000 UTC" firstStartedPulling="2025-10-14 14:16:41.694416228 +0000 UTC m=+3718.542851077" lastFinishedPulling="2025-10-14 14:16:44.228744449 +0000 UTC m=+3721.077179288" observedRunningTime="2025-10-14 14:16:44.766177584 +0000 UTC m=+3721.614612423" watchObservedRunningTime="2025-10-14 14:16:44.774107758 +0000 UTC m=+3721.622542567"
Oct 14 14:16:50 crc kubenswrapper[4725]: I1014 14:16:50.159677 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xtnjr"
Oct 14 14:16:50 crc kubenswrapper[4725]: I1014 14:16:50.160140 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xtnjr"
Oct 14 14:16:50 crc kubenswrapper[4725]: I1014 14:16:50.220036 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xtnjr"
Oct 14 14:16:50 crc kubenswrapper[4725]: I1014 14:16:50.891218 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xtnjr"
Oct 14 14:16:50 crc kubenswrapper[4725]: I1014 14:16:50.988426 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xtnjr"]
Oct 14 14:16:52 crc kubenswrapper[4725]: I1014 14:16:52.824373 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xtnjr" podUID="74a4189f-fe03-4e10-ae35-d14bc70c2c5b" containerName="registry-server" containerID="cri-o://e574cbd82023b75e22432ce7bc77817b59d09df10c846ee23824c6969bba16d9" gracePeriod=2
Oct 14 14:16:52 crc kubenswrapper[4725]: I1014 14:16:52.923506 4725 scope.go:117] "RemoveContainer" containerID="1e2cf79492a165ced9cc9a27ff00cb3288ac5a0abfa5f696ffa2da70ed76d129"
Oct 14 14:16:52 crc kubenswrapper[4725]: E1014 14:16:52.924082 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c"
Oct 14 14:16:53 crc kubenswrapper[4725]: I1014 14:16:53.314745 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xtnjr"
Oct 14 14:16:53 crc kubenswrapper[4725]: I1014 14:16:53.411044 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74a4189f-fe03-4e10-ae35-d14bc70c2c5b-utilities\") pod \"74a4189f-fe03-4e10-ae35-d14bc70c2c5b\" (UID: \"74a4189f-fe03-4e10-ae35-d14bc70c2c5b\") "
Oct 14 14:16:53 crc kubenswrapper[4725]: I1014 14:16:53.411105 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdwvw\" (UniqueName: \"kubernetes.io/projected/74a4189f-fe03-4e10-ae35-d14bc70c2c5b-kube-api-access-cdwvw\") pod \"74a4189f-fe03-4e10-ae35-d14bc70c2c5b\" (UID: \"74a4189f-fe03-4e10-ae35-d14bc70c2c5b\") "
Oct 14 14:16:53 crc kubenswrapper[4725]: I1014 14:16:53.411264 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74a4189f-fe03-4e10-ae35-d14bc70c2c5b-catalog-content\") pod \"74a4189f-fe03-4e10-ae35-d14bc70c2c5b\" (UID: \"74a4189f-fe03-4e10-ae35-d14bc70c2c5b\") "
Oct 14 14:16:53 crc kubenswrapper[4725]: I1014 14:16:53.412202 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74a4189f-fe03-4e10-ae35-d14bc70c2c5b-utilities" (OuterVolumeSpecName: "utilities") pod "74a4189f-fe03-4e10-ae35-d14bc70c2c5b" (UID: "74a4189f-fe03-4e10-ae35-d14bc70c2c5b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 14:16:53 crc kubenswrapper[4725]: I1014 14:16:53.416947 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74a4189f-fe03-4e10-ae35-d14bc70c2c5b-kube-api-access-cdwvw" (OuterVolumeSpecName: "kube-api-access-cdwvw") pod "74a4189f-fe03-4e10-ae35-d14bc70c2c5b" (UID: "74a4189f-fe03-4e10-ae35-d14bc70c2c5b"). InnerVolumeSpecName "kube-api-access-cdwvw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 14:16:53 crc kubenswrapper[4725]: I1014 14:16:53.508360 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74a4189f-fe03-4e10-ae35-d14bc70c2c5b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "74a4189f-fe03-4e10-ae35-d14bc70c2c5b" (UID: "74a4189f-fe03-4e10-ae35-d14bc70c2c5b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 14:16:53 crc kubenswrapper[4725]: I1014 14:16:53.513207 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74a4189f-fe03-4e10-ae35-d14bc70c2c5b-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 14 14:16:53 crc kubenswrapper[4725]: I1014 14:16:53.513256 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74a4189f-fe03-4e10-ae35-d14bc70c2c5b-utilities\") on node \"crc\" DevicePath \"\""
Oct 14 14:16:53 crc kubenswrapper[4725]: I1014 14:16:53.513271 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdwvw\" (UniqueName: \"kubernetes.io/projected/74a4189f-fe03-4e10-ae35-d14bc70c2c5b-kube-api-access-cdwvw\") on node \"crc\" DevicePath \"\""
Oct 14 14:16:53 crc kubenswrapper[4725]: I1014 14:16:53.834724 4725 generic.go:334] "Generic (PLEG): container finished" podID="74a4189f-fe03-4e10-ae35-d14bc70c2c5b" containerID="e574cbd82023b75e22432ce7bc77817b59d09df10c846ee23824c6969bba16d9" exitCode=0
Oct 14 14:16:53 crc kubenswrapper[4725]: I1014 14:16:53.834772 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtnjr" event={"ID":"74a4189f-fe03-4e10-ae35-d14bc70c2c5b","Type":"ContainerDied","Data":"e574cbd82023b75e22432ce7bc77817b59d09df10c846ee23824c6969bba16d9"}
Oct 14 14:16:53 crc kubenswrapper[4725]: I1014 14:16:53.834794 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xtnjr"
Oct 14 14:16:53 crc kubenswrapper[4725]: I1014 14:16:53.834812 4725 scope.go:117] "RemoveContainer" containerID="e574cbd82023b75e22432ce7bc77817b59d09df10c846ee23824c6969bba16d9"
Oct 14 14:16:53 crc kubenswrapper[4725]: I1014 14:16:53.834800 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtnjr" event={"ID":"74a4189f-fe03-4e10-ae35-d14bc70c2c5b","Type":"ContainerDied","Data":"251fe132870df442e11cc03ba6a7cb1cf0ffe6eca4b248abcf1972a1d9557285"}
Oct 14 14:16:53 crc kubenswrapper[4725]: I1014 14:16:53.871487 4725 scope.go:117] "RemoveContainer" containerID="0cb1693dbb3bb335e73ee0b153add874b0252846b2bf433480f094a7408f9db5"
Oct 14 14:16:53 crc kubenswrapper[4725]: I1014 14:16:53.873721 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xtnjr"]
Oct 14 14:16:53 crc kubenswrapper[4725]: I1014 14:16:53.887724 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xtnjr"]
Oct 14 14:16:53 crc kubenswrapper[4725]: I1014 14:16:53.905268 4725 scope.go:117] "RemoveContainer" containerID="f66a908bf42c26171aaa12e4ac9df94a6c368113c2a0ef8dfbc43fe4cd19ed6a"
Oct 14 14:16:53 crc kubenswrapper[4725]: I1014 14:16:53.934061 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74a4189f-fe03-4e10-ae35-d14bc70c2c5b" path="/var/lib/kubelet/pods/74a4189f-fe03-4e10-ae35-d14bc70c2c5b/volumes"
Oct 14 14:16:53 crc kubenswrapper[4725]: I1014 14:16:53.959551 4725 scope.go:117] "RemoveContainer" containerID="e574cbd82023b75e22432ce7bc77817b59d09df10c846ee23824c6969bba16d9"
Oct 14 14:16:53 crc kubenswrapper[4725]: E1014 14:16:53.960388 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e574cbd82023b75e22432ce7bc77817b59d09df10c846ee23824c6969bba16d9\": container with ID starting with e574cbd82023b75e22432ce7bc77817b59d09df10c846ee23824c6969bba16d9 not found: ID does not exist" containerID="e574cbd82023b75e22432ce7bc77817b59d09df10c846ee23824c6969bba16d9"
Oct 14 14:16:53 crc kubenswrapper[4725]: I1014 14:16:53.960425 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e574cbd82023b75e22432ce7bc77817b59d09df10c846ee23824c6969bba16d9"} err="failed to get container status \"e574cbd82023b75e22432ce7bc77817b59d09df10c846ee23824c6969bba16d9\": rpc error: code = NotFound desc = could not find container \"e574cbd82023b75e22432ce7bc77817b59d09df10c846ee23824c6969bba16d9\": container with ID starting with e574cbd82023b75e22432ce7bc77817b59d09df10c846ee23824c6969bba16d9 not found: ID does not exist"
Oct 14 14:16:53 crc kubenswrapper[4725]: I1014 14:16:53.960446 4725 scope.go:117] "RemoveContainer" containerID="0cb1693dbb3bb335e73ee0b153add874b0252846b2bf433480f094a7408f9db5"
Oct 14 14:16:53 crc kubenswrapper[4725]: E1014 14:16:53.960809 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cb1693dbb3bb335e73ee0b153add874b0252846b2bf433480f094a7408f9db5\": container with ID starting with 0cb1693dbb3bb335e73ee0b153add874b0252846b2bf433480f094a7408f9db5 not found: ID does not exist" containerID="0cb1693dbb3bb335e73ee0b153add874b0252846b2bf433480f094a7408f9db5"
Oct 14 14:16:53 crc kubenswrapper[4725]: I1014 14:16:53.960842 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cb1693dbb3bb335e73ee0b153add874b0252846b2bf433480f094a7408f9db5"} err="failed to get container status \"0cb1693dbb3bb335e73ee0b153add874b0252846b2bf433480f094a7408f9db5\": rpc error: code = NotFound desc = could not find container \"0cb1693dbb3bb335e73ee0b153add874b0252846b2bf433480f094a7408f9db5\": container with ID starting with 0cb1693dbb3bb335e73ee0b153add874b0252846b2bf433480f094a7408f9db5 not found: ID does not exist"
Oct 14 14:16:53 crc kubenswrapper[4725]: I1014 14:16:53.960865 4725 scope.go:117] "RemoveContainer" containerID="f66a908bf42c26171aaa12e4ac9df94a6c368113c2a0ef8dfbc43fe4cd19ed6a"
Oct 14 14:16:53 crc kubenswrapper[4725]: E1014 14:16:53.961208 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f66a908bf42c26171aaa12e4ac9df94a6c368113c2a0ef8dfbc43fe4cd19ed6a\": container with ID starting with f66a908bf42c26171aaa12e4ac9df94a6c368113c2a0ef8dfbc43fe4cd19ed6a not found: ID does not exist" containerID="f66a908bf42c26171aaa12e4ac9df94a6c368113c2a0ef8dfbc43fe4cd19ed6a"
Oct 14 14:16:53 crc kubenswrapper[4725]: I1014 14:16:53.961234 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f66a908bf42c26171aaa12e4ac9df94a6c368113c2a0ef8dfbc43fe4cd19ed6a"} err="failed to get container status \"f66a908bf42c26171aaa12e4ac9df94a6c368113c2a0ef8dfbc43fe4cd19ed6a\": rpc error: code = NotFound desc = could not find container \"f66a908bf42c26171aaa12e4ac9df94a6c368113c2a0ef8dfbc43fe4cd19ed6a\": container with ID starting with f66a908bf42c26171aaa12e4ac9df94a6c368113c2a0ef8dfbc43fe4cd19ed6a not found: ID does not exist"
Oct 14 14:17:04 crc kubenswrapper[4725]: I1014 14:17:04.921592 4725 scope.go:117] "RemoveContainer" containerID="1e2cf79492a165ced9cc9a27ff00cb3288ac5a0abfa5f696ffa2da70ed76d129"
Oct 14 14:17:04 crc kubenswrapper[4725]: E1014 14:17:04.922483 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c"
Oct 14 14:17:15 crc kubenswrapper[4725]: I1014 14:17:15.925326 4725 scope.go:117] "RemoveContainer" containerID="1e2cf79492a165ced9cc9a27ff00cb3288ac5a0abfa5f696ffa2da70ed76d129"
Oct 14 14:17:15 crc kubenswrapper[4725]: E1014 14:17:15.926159 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c"
Oct 14 14:17:27 crc kubenswrapper[4725]: I1014 14:17:27.921646 4725 scope.go:117] "RemoveContainer" containerID="1e2cf79492a165ced9cc9a27ff00cb3288ac5a0abfa5f696ffa2da70ed76d129"
Oct 14 14:17:27 crc kubenswrapper[4725]: E1014 14:17:27.922556 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c"
Oct 14 14:17:29 crc kubenswrapper[4725]: I1014 14:17:29.297665 4725 generic.go:334] "Generic (PLEG): container finished" podID="976babc1-ded9-4980-92a2-0354b78672eb" containerID="711b81e3f3c4941217d9276229f64f152ea57796f16e0b28f951c98a2c1af077" exitCode=0
Oct 14 14:17:29 crc kubenswrapper[4725]: I1014 14:17:29.297783 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mwvps/must-gather-p7fzg" event={"ID":"976babc1-ded9-4980-92a2-0354b78672eb","Type":"ContainerDied","Data":"711b81e3f3c4941217d9276229f64f152ea57796f16e0b28f951c98a2c1af077"}
Oct 14 14:17:29 crc kubenswrapper[4725]: I1014 14:17:29.299140 4725 scope.go:117] "RemoveContainer" containerID="711b81e3f3c4941217d9276229f64f152ea57796f16e0b28f951c98a2c1af077"
Oct 14 14:17:29 crc kubenswrapper[4725]: I1014 14:17:29.900775 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mwvps_must-gather-p7fzg_976babc1-ded9-4980-92a2-0354b78672eb/gather/0.log"
Oct 14 14:17:37 crc kubenswrapper[4725]: I1014 14:17:37.579558 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mwvps/must-gather-p7fzg"]
Oct 14 14:17:37 crc kubenswrapper[4725]: I1014 14:17:37.582594 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-mwvps/must-gather-p7fzg" podUID="976babc1-ded9-4980-92a2-0354b78672eb" containerName="copy" containerID="cri-o://e3b90394f98758c64dea80f2c4e4aab14cdf79155dc9fac12bfb41d530bbd34d" gracePeriod=2
Oct 14 14:17:37 crc kubenswrapper[4725]: I1014 14:17:37.639338 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mwvps/must-gather-p7fzg"]
Oct 14 14:17:38 crc kubenswrapper[4725]: I1014 14:17:38.044983 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mwvps_must-gather-p7fzg_976babc1-ded9-4980-92a2-0354b78672eb/copy/0.log"
Oct 14 14:17:38 crc kubenswrapper[4725]: I1014 14:17:38.046099 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mwvps/must-gather-p7fzg"
Oct 14 14:17:38 crc kubenswrapper[4725]: I1014 14:17:38.092214 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52vtp\" (UniqueName: \"kubernetes.io/projected/976babc1-ded9-4980-92a2-0354b78672eb-kube-api-access-52vtp\") pod \"976babc1-ded9-4980-92a2-0354b78672eb\" (UID: \"976babc1-ded9-4980-92a2-0354b78672eb\") "
Oct 14 14:17:38 crc kubenswrapper[4725]: I1014 14:17:38.092337 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/976babc1-ded9-4980-92a2-0354b78672eb-must-gather-output\") pod \"976babc1-ded9-4980-92a2-0354b78672eb\" (UID: \"976babc1-ded9-4980-92a2-0354b78672eb\") "
Oct 14 14:17:38 crc kubenswrapper[4725]: I1014 14:17:38.102319 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/976babc1-ded9-4980-92a2-0354b78672eb-kube-api-access-52vtp" (OuterVolumeSpecName: "kube-api-access-52vtp") pod "976babc1-ded9-4980-92a2-0354b78672eb" (UID: "976babc1-ded9-4980-92a2-0354b78672eb"). InnerVolumeSpecName "kube-api-access-52vtp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 14 14:17:38 crc kubenswrapper[4725]: I1014 14:17:38.195149 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52vtp\" (UniqueName: \"kubernetes.io/projected/976babc1-ded9-4980-92a2-0354b78672eb-kube-api-access-52vtp\") on node \"crc\" DevicePath \"\""
Oct 14 14:17:38 crc kubenswrapper[4725]: I1014 14:17:38.230690 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/976babc1-ded9-4980-92a2-0354b78672eb-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "976babc1-ded9-4980-92a2-0354b78672eb" (UID: "976babc1-ded9-4980-92a2-0354b78672eb"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 14 14:17:38 crc kubenswrapper[4725]: I1014 14:17:38.297192 4725 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/976babc1-ded9-4980-92a2-0354b78672eb-must-gather-output\") on node \"crc\" DevicePath \"\""
Oct 14 14:17:38 crc kubenswrapper[4725]: I1014 14:17:38.389471 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mwvps_must-gather-p7fzg_976babc1-ded9-4980-92a2-0354b78672eb/copy/0.log"
Oct 14 14:17:38 crc kubenswrapper[4725]: I1014 14:17:38.389833 4725 generic.go:334] "Generic (PLEG): container finished" podID="976babc1-ded9-4980-92a2-0354b78672eb" containerID="e3b90394f98758c64dea80f2c4e4aab14cdf79155dc9fac12bfb41d530bbd34d" exitCode=143
Oct 14 14:17:38 crc kubenswrapper[4725]: I1014 14:17:38.389876 4725 scope.go:117] "RemoveContainer" containerID="e3b90394f98758c64dea80f2c4e4aab14cdf79155dc9fac12bfb41d530bbd34d"
Oct 14 14:17:38 crc kubenswrapper[4725]: I1014 14:17:38.389931 4725 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-must-gather-mwvps/must-gather-p7fzg" Oct 14 14:17:38 crc kubenswrapper[4725]: I1014 14:17:38.426778 4725 scope.go:117] "RemoveContainer" containerID="711b81e3f3c4941217d9276229f64f152ea57796f16e0b28f951c98a2c1af077" Oct 14 14:17:38 crc kubenswrapper[4725]: I1014 14:17:38.474481 4725 scope.go:117] "RemoveContainer" containerID="e3b90394f98758c64dea80f2c4e4aab14cdf79155dc9fac12bfb41d530bbd34d" Oct 14 14:17:38 crc kubenswrapper[4725]: E1014 14:17:38.474933 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3b90394f98758c64dea80f2c4e4aab14cdf79155dc9fac12bfb41d530bbd34d\": container with ID starting with e3b90394f98758c64dea80f2c4e4aab14cdf79155dc9fac12bfb41d530bbd34d not found: ID does not exist" containerID="e3b90394f98758c64dea80f2c4e4aab14cdf79155dc9fac12bfb41d530bbd34d" Oct 14 14:17:38 crc kubenswrapper[4725]: I1014 14:17:38.474964 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3b90394f98758c64dea80f2c4e4aab14cdf79155dc9fac12bfb41d530bbd34d"} err="failed to get container status \"e3b90394f98758c64dea80f2c4e4aab14cdf79155dc9fac12bfb41d530bbd34d\": rpc error: code = NotFound desc = could not find container \"e3b90394f98758c64dea80f2c4e4aab14cdf79155dc9fac12bfb41d530bbd34d\": container with ID starting with e3b90394f98758c64dea80f2c4e4aab14cdf79155dc9fac12bfb41d530bbd34d not found: ID does not exist" Oct 14 14:17:38 crc kubenswrapper[4725]: I1014 14:17:38.474983 4725 scope.go:117] "RemoveContainer" containerID="711b81e3f3c4941217d9276229f64f152ea57796f16e0b28f951c98a2c1af077" Oct 14 14:17:38 crc kubenswrapper[4725]: E1014 14:17:38.475658 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"711b81e3f3c4941217d9276229f64f152ea57796f16e0b28f951c98a2c1af077\": container with ID starting with 711b81e3f3c4941217d9276229f64f152ea57796f16e0b28f951c98a2c1af077 not found: ID does not exist" containerID="711b81e3f3c4941217d9276229f64f152ea57796f16e0b28f951c98a2c1af077" Oct 14 14:17:38 crc kubenswrapper[4725]: I1014 14:17:38.475690 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"711b81e3f3c4941217d9276229f64f152ea57796f16e0b28f951c98a2c1af077"} err="failed to get container status \"711b81e3f3c4941217d9276229f64f152ea57796f16e0b28f951c98a2c1af077\": rpc error: code = NotFound desc = could not find container \"711b81e3f3c4941217d9276229f64f152ea57796f16e0b28f951c98a2c1af077\": container with ID starting with 711b81e3f3c4941217d9276229f64f152ea57796f16e0b28f951c98a2c1af077 not found: ID does not exist" Oct 14 14:17:39 crc kubenswrapper[4725]: I1014 14:17:39.934943 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="976babc1-ded9-4980-92a2-0354b78672eb" path="/var/lib/kubelet/pods/976babc1-ded9-4980-92a2-0354b78672eb/volumes" Oct 14 14:17:41 crc kubenswrapper[4725]: I1014 14:17:41.922044 4725 scope.go:117] "RemoveContainer" containerID="1e2cf79492a165ced9cc9a27ff00cb3288ac5a0abfa5f696ffa2da70ed76d129" Oct 14 14:17:41 crc kubenswrapper[4725]: E1014 14:17:41.922989 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 14:17:54 crc kubenswrapper[4725]: I1014 14:17:54.921816 4725 scope.go:117] "RemoveContainer" containerID="1e2cf79492a165ced9cc9a27ff00cb3288ac5a0abfa5f696ffa2da70ed76d129" Oct 14 14:17:54 crc kubenswrapper[4725]: E1014 14:17:54.922662 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 14:17:59 crc kubenswrapper[4725]: I1014 14:17:59.578763 4725 scope.go:117] "RemoveContainer" containerID="c438db528cd556852c79e9b1a435151c1f09e35be299b9e476e06e8883164b44" Oct 14 14:17:59 crc kubenswrapper[4725]: I1014 14:17:59.618994 4725 scope.go:117] "RemoveContainer" containerID="2ac0b879664c33f3f4934568402048e87bc0d218ed8ca3e0d9053df7fbb0e1e5" Oct 14 14:17:59 crc kubenswrapper[4725]: I1014 14:17:59.685669 4725 scope.go:117] "RemoveContainer" containerID="28226075466cf5e3632d853e553f08ae3266ea0a3dd5c9ad27b6b9eb58eb6e98" Oct 14 14:18:05 crc kubenswrapper[4725]: I1014 14:18:05.921255 4725 scope.go:117] "RemoveContainer" containerID="1e2cf79492a165ced9cc9a27ff00cb3288ac5a0abfa5f696ffa2da70ed76d129" Oct 14 14:18:05 crc kubenswrapper[4725]: E1014 14:18:05.922045 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 14:18:18 crc kubenswrapper[4725]: I1014 14:18:18.921780 4725 scope.go:117] "RemoveContainer" containerID="1e2cf79492a165ced9cc9a27ff00cb3288ac5a0abfa5f696ffa2da70ed76d129" Oct 14 14:18:18 crc kubenswrapper[4725]: E1014 14:18:18.922672 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 14:18:23 crc kubenswrapper[4725]: I1014 14:18:23.147295 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5kqrn/must-gather-96nvv"] Oct 14 14:18:23 crc kubenswrapper[4725]: E1014 14:18:23.149708 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74a4189f-fe03-4e10-ae35-d14bc70c2c5b" containerName="extract-content" Oct 14 14:18:23 crc kubenswrapper[4725]: I1014 14:18:23.149728 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="74a4189f-fe03-4e10-ae35-d14bc70c2c5b" containerName="extract-content" Oct 14 14:18:23 crc kubenswrapper[4725]: E1014 14:18:23.149743 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="976babc1-ded9-4980-92a2-0354b78672eb" containerName="gather" Oct 14 14:18:23 crc kubenswrapper[4725]: I1014 14:18:23.149749 4725 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="976babc1-ded9-4980-92a2-0354b78672eb" containerName="gather" Oct 14 14:18:23 crc kubenswrapper[4725]: E1014 14:18:23.149774 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="976babc1-ded9-4980-92a2-0354b78672eb" containerName="copy" Oct 14 14:18:23 crc kubenswrapper[4725]: I1014 14:18:23.149780 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="976babc1-ded9-4980-92a2-0354b78672eb" containerName="copy" Oct 14 14:18:23 crc kubenswrapper[4725]: E1014 14:18:23.149829 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74a4189f-fe03-4e10-ae35-d14bc70c2c5b" containerName="registry-server" Oct 14 14:18:23 crc kubenswrapper[4725]: I1014 14:18:23.149839 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="74a4189f-fe03-4e10-ae35-d14bc70c2c5b" containerName="registry-server" Oct 14 14:18:23 crc kubenswrapper[4725]: E1014 14:18:23.149861 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74a4189f-fe03-4e10-ae35-d14bc70c2c5b" containerName="extract-utilities" Oct 14 14:18:23 crc kubenswrapper[4725]: I1014 14:18:23.149868 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="74a4189f-fe03-4e10-ae35-d14bc70c2c5b" containerName="extract-utilities" Oct 14 14:18:23 crc kubenswrapper[4725]: I1014 14:18:23.150107 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="74a4189f-fe03-4e10-ae35-d14bc70c2c5b" containerName="registry-server" Oct 14 14:18:23 crc kubenswrapper[4725]: I1014 14:18:23.150129 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="976babc1-ded9-4980-92a2-0354b78672eb" containerName="gather" Oct 14 14:18:23 crc kubenswrapper[4725]: I1014 14:18:23.150144 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="976babc1-ded9-4980-92a2-0354b78672eb" containerName="copy" Oct 14 14:18:23 crc kubenswrapper[4725]: I1014 14:18:23.151530 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5kqrn/must-gather-96nvv" Oct 14 14:18:23 crc kubenswrapper[4725]: I1014 14:18:23.154176 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-5kqrn"/"default-dockercfg-vvjjz" Oct 14 14:18:23 crc kubenswrapper[4725]: I1014 14:18:23.154437 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-5kqrn"/"openshift-service-ca.crt" Oct 14 14:18:23 crc kubenswrapper[4725]: I1014 14:18:23.156415 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-5kqrn"/"kube-root-ca.crt" Oct 14 14:18:23 crc kubenswrapper[4725]: I1014 14:18:23.176507 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-5kqrn/must-gather-96nvv"] Oct 14 14:18:23 crc kubenswrapper[4725]: I1014 14:18:23.298085 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwktb\" (UniqueName: \"kubernetes.io/projected/ddd12d26-f51d-4648-8f6b-abee443e6911-kube-api-access-dwktb\") pod \"must-gather-96nvv\" (UID: \"ddd12d26-f51d-4648-8f6b-abee443e6911\") " pod="openshift-must-gather-5kqrn/must-gather-96nvv" Oct 14 14:18:23 crc kubenswrapper[4725]: I1014 14:18:23.298179 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ddd12d26-f51d-4648-8f6b-abee443e6911-must-gather-output\") pod \"must-gather-96nvv\" (UID: \"ddd12d26-f51d-4648-8f6b-abee443e6911\") " pod="openshift-must-gather-5kqrn/must-gather-96nvv" Oct 14 14:18:23 crc kubenswrapper[4725]: I1014 14:18:23.399578 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ddd12d26-f51d-4648-8f6b-abee443e6911-must-gather-output\") pod \"must-gather-96nvv\" (UID: \"ddd12d26-f51d-4648-8f6b-abee443e6911\") " pod="openshift-must-gather-5kqrn/must-gather-96nvv" Oct 14 14:18:23 crc kubenswrapper[4725]: I1014 14:18:23.399728 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwktb\" (UniqueName: \"kubernetes.io/projected/ddd12d26-f51d-4648-8f6b-abee443e6911-kube-api-access-dwktb\") pod \"must-gather-96nvv\" (UID: \"ddd12d26-f51d-4648-8f6b-abee443e6911\") " pod="openshift-must-gather-5kqrn/must-gather-96nvv" Oct 14 14:18:23 crc kubenswrapper[4725]: I1014 14:18:23.400286 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ddd12d26-f51d-4648-8f6b-abee443e6911-must-gather-output\") pod \"must-gather-96nvv\" (UID: \"ddd12d26-f51d-4648-8f6b-abee443e6911\") " pod="openshift-must-gather-5kqrn/must-gather-96nvv" Oct 14 14:18:23 crc kubenswrapper[4725]: I1014 14:18:23.417166 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwktb\" (UniqueName: \"kubernetes.io/projected/ddd12d26-f51d-4648-8f6b-abee443e6911-kube-api-access-dwktb\") pod \"must-gather-96nvv\" (UID: \"ddd12d26-f51d-4648-8f6b-abee443e6911\") " pod="openshift-must-gather-5kqrn/must-gather-96nvv" Oct 14 14:18:23 crc kubenswrapper[4725]: I1014 14:18:23.478660 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5kqrn/must-gather-96nvv" Oct 14 14:18:23 crc kubenswrapper[4725]: W1014 14:18:23.926894 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podddd12d26_f51d_4648_8f6b_abee443e6911.slice/crio-04d9dfc047f232bc90a470789b22359d2e254ac38f4aa4981524f3d85c85000e WatchSource:0}: Error finding container 04d9dfc047f232bc90a470789b22359d2e254ac38f4aa4981524f3d85c85000e: Status 404 returned error can't find the container with id 04d9dfc047f232bc90a470789b22359d2e254ac38f4aa4981524f3d85c85000e Oct 14 14:18:23 crc kubenswrapper[4725]: I1014 14:18:23.945966 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-5kqrn/must-gather-96nvv"] Oct 14 14:18:24 crc kubenswrapper[4725]: I1014 14:18:24.938419 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5kqrn/must-gather-96nvv" event={"ID":"ddd12d26-f51d-4648-8f6b-abee443e6911","Type":"ContainerStarted","Data":"95e231b9ff39b7b8ea02311d12dd7a6f0c4124d703edbacba58debdb6d4f2213"} Oct 14 14:18:24 crc kubenswrapper[4725]: I1014 14:18:24.938946 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5kqrn/must-gather-96nvv" event={"ID":"ddd12d26-f51d-4648-8f6b-abee443e6911","Type":"ContainerStarted","Data":"87a2471b84a838381e4a020b84b1a1d8b971fe47a550eb3ec0bedd4f0b4456ee"} Oct 14 14:18:24 crc kubenswrapper[4725]: I1014 14:18:24.938958 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5kqrn/must-gather-96nvv" event={"ID":"ddd12d26-f51d-4648-8f6b-abee443e6911","Type":"ContainerStarted","Data":"04d9dfc047f232bc90a470789b22359d2e254ac38f4aa4981524f3d85c85000e"} Oct 14 14:18:24 crc kubenswrapper[4725]: I1014 14:18:24.955950 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-5kqrn/must-gather-96nvv" podStartSLOduration=1.955928776 podStartE2EDuration="1.955928776s" podCreationTimestamp="2025-10-14 14:18:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:18:24.953416098 +0000 UTC m=+3821.801850907" watchObservedRunningTime="2025-10-14 14:18:24.955928776 +0000 UTC m=+3821.804363575" Oct 14 14:18:27 crc kubenswrapper[4725]: I1014 14:18:27.714156 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5kqrn/crc-debug-h2cl8"] Oct 14 14:18:27 crc kubenswrapper[4725]: I1014 14:18:27.716121 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5kqrn/crc-debug-h2cl8" Oct 14 14:18:27 crc kubenswrapper[4725]: I1014 14:18:27.798807 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s46w9\" (UniqueName: \"kubernetes.io/projected/c79d3f8c-2b71-49ba-a5bd-0b68c5cb7319-kube-api-access-s46w9\") pod \"crc-debug-h2cl8\" (UID: \"c79d3f8c-2b71-49ba-a5bd-0b68c5cb7319\") " pod="openshift-must-gather-5kqrn/crc-debug-h2cl8" Oct 14 14:18:27 crc kubenswrapper[4725]: I1014 14:18:27.798879 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c79d3f8c-2b71-49ba-a5bd-0b68c5cb7319-host\") pod \"crc-debug-h2cl8\" (UID: \"c79d3f8c-2b71-49ba-a5bd-0b68c5cb7319\") " pod="openshift-must-gather-5kqrn/crc-debug-h2cl8" Oct 14 14:18:27 crc kubenswrapper[4725]: I1014 14:18:27.901096 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s46w9\" (UniqueName: \"kubernetes.io/projected/c79d3f8c-2b71-49ba-a5bd-0b68c5cb7319-kube-api-access-s46w9\") pod \"crc-debug-h2cl8\" (UID: \"c79d3f8c-2b71-49ba-a5bd-0b68c5cb7319\") " pod="openshift-must-gather-5kqrn/crc-debug-h2cl8" Oct 14 14:18:27 crc kubenswrapper[4725]: I1014 14:18:27.901225 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c79d3f8c-2b71-49ba-a5bd-0b68c5cb7319-host\") pod \"crc-debug-h2cl8\" (UID: \"c79d3f8c-2b71-49ba-a5bd-0b68c5cb7319\") " pod="openshift-must-gather-5kqrn/crc-debug-h2cl8" Oct 14 14:18:27 crc kubenswrapper[4725]: I1014 14:18:27.901358 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c79d3f8c-2b71-49ba-a5bd-0b68c5cb7319-host\") pod \"crc-debug-h2cl8\" (UID: \"c79d3f8c-2b71-49ba-a5bd-0b68c5cb7319\") " pod="openshift-must-gather-5kqrn/crc-debug-h2cl8" Oct 14 14:18:27 crc kubenswrapper[4725]: I1014 14:18:27.929370 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s46w9\" (UniqueName: \"kubernetes.io/projected/c79d3f8c-2b71-49ba-a5bd-0b68c5cb7319-kube-api-access-s46w9\") pod \"crc-debug-h2cl8\" (UID: \"c79d3f8c-2b71-49ba-a5bd-0b68c5cb7319\") " pod="openshift-must-gather-5kqrn/crc-debug-h2cl8" Oct 14 14:18:28 crc kubenswrapper[4725]: I1014 14:18:28.042232 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5kqrn/crc-debug-h2cl8" Oct 14 14:18:28 crc kubenswrapper[4725]: W1014 14:18:28.067503 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc79d3f8c_2b71_49ba_a5bd_0b68c5cb7319.slice/crio-807b27c1a52fb9374691145dc46cfcf748885702537dc8188f38d107b3847766 WatchSource:0}: Error finding container 807b27c1a52fb9374691145dc46cfcf748885702537dc8188f38d107b3847766: Status 404 returned error can't find the container with id 807b27c1a52fb9374691145dc46cfcf748885702537dc8188f38d107b3847766 Oct 14 14:18:28 crc kubenswrapper[4725]: I1014 14:18:28.978377 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5kqrn/crc-debug-h2cl8" event={"ID":"c79d3f8c-2b71-49ba-a5bd-0b68c5cb7319","Type":"ContainerStarted","Data":"2d350d9bcf4f6b6e51d3f2de86c0ff30955b2271f1bb50014dc348417b0c62ae"} Oct 14 14:18:28 crc kubenswrapper[4725]: I1014 14:18:28.978821 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5kqrn/crc-debug-h2cl8" event={"ID":"c79d3f8c-2b71-49ba-a5bd-0b68c5cb7319","Type":"ContainerStarted","Data":"807b27c1a52fb9374691145dc46cfcf748885702537dc8188f38d107b3847766"} Oct 14 14:18:28 crc kubenswrapper[4725]: I1014 14:18:28.997654 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-5kqrn/crc-debug-h2cl8" podStartSLOduration=1.997622614 podStartE2EDuration="1.997622614s" podCreationTimestamp="2025-10-14 14:18:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:18:28.990873561 +0000 UTC m=+3825.839308410" watchObservedRunningTime="2025-10-14 14:18:28.997622614 +0000 UTC m=+3825.846057453" Oct 14 14:18:33 crc kubenswrapper[4725]: I1014 14:18:33.927233 4725 scope.go:117] "RemoveContainer" containerID="1e2cf79492a165ced9cc9a27ff00cb3288ac5a0abfa5f696ffa2da70ed76d129" Oct 14 14:18:33 crc kubenswrapper[4725]: E1014 14:18:33.928007 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 14:18:47 crc kubenswrapper[4725]: I1014 14:18:47.920786 4725 scope.go:117] "RemoveContainer" containerID="1e2cf79492a165ced9cc9a27ff00cb3288ac5a0abfa5f696ffa2da70ed76d129" Oct 14 14:18:47 crc kubenswrapper[4725]: E1014 14:18:47.921564 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 14:18:59 crc kubenswrapper[4725]: I1014 14:18:59.921143 4725 scope.go:117] "RemoveContainer" containerID="1e2cf79492a165ced9cc9a27ff00cb3288ac5a0abfa5f696ffa2da70ed76d129" Oct 14 14:18:59 crc kubenswrapper[4725]: E1014 14:18:59.922055 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 14:19:12 crc kubenswrapper[4725]: I1014 14:19:12.922034 4725 scope.go:117] "RemoveContainer" containerID="1e2cf79492a165ced9cc9a27ff00cb3288ac5a0abfa5f696ffa2da70ed76d129" Oct 14 14:19:12 crc kubenswrapper[4725]: E1014 14:19:12.922602 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 14:19:22 crc kubenswrapper[4725]: I1014 14:19:22.645380 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5b8b4d8db-n47v4_6497a894-212e-4478-844f-8e401f1de8fa/barbican-api/0.log" Oct 14 14:19:22 crc kubenswrapper[4725]: I1014 14:19:22.670572 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5b8b4d8db-n47v4_6497a894-212e-4478-844f-8e401f1de8fa/barbican-api-log/0.log" Oct 14 14:19:22 crc kubenswrapper[4725]: I1014 14:19:22.835203 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-65974cff8b-hfxck_f6cdc779-b232-4adb-9a2e-1605f2ebadbf/barbican-keystone-listener/0.log" Oct 14 14:19:22 crc kubenswrapper[4725]: I1014 14:19:22.901287 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-65974cff8b-hfxck_f6cdc779-b232-4adb-9a2e-1605f2ebadbf/barbican-keystone-listener-log/0.log" Oct 14 14:19:23 crc kubenswrapper[4725]: I1014 14:19:23.013969 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6bf494f4f9-qtck9_af54da1d-ac00-444e-b85b-6f4a5d286dc6/barbican-worker/0.log" Oct 14 14:19:23 crc kubenswrapper[4725]: I1014 14:19:23.077995 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6bf494f4f9-qtck9_af54da1d-ac00-444e-b85b-6f4a5d286dc6/barbican-worker-log/0.log" Oct 14 14:19:23 crc kubenswrapper[4725]: I1014 14:19:23.273546 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-vgkf8_7fb7769c-8d15-4925-8f3a-3f8e5a81fd72/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 14:19:23 crc kubenswrapper[4725]: I1014 14:19:23.461368 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_17b19201-3dd2-4e20-bc62-3727faed2947/ceilometer-central-agent/0.log" Oct 14 14:19:23 crc kubenswrapper[4725]: I1014 14:19:23.474471 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_17b19201-3dd2-4e20-bc62-3727faed2947/ceilometer-notification-agent/0.log" Oct 14 14:19:23 crc kubenswrapper[4725]: I1014 14:19:23.501394 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_17b19201-3dd2-4e20-bc62-3727faed2947/proxy-httpd/0.log" Oct 14 14:19:23 crc kubenswrapper[4725]: I1014 14:19:23.635912 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_17b19201-3dd2-4e20-bc62-3727faed2947/sg-core/0.log" Oct 14 14:19:23 crc 
kubenswrapper[4725]: I1014 14:19:23.726245 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_2909f1c8-fdcc-4565-a776-576f95ce7fa3/cinder-api/0.log" Oct 14 14:19:23 crc kubenswrapper[4725]: I1014 14:19:23.816621 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_2909f1c8-fdcc-4565-a776-576f95ce7fa3/cinder-api-log/0.log" Oct 14 14:19:23 crc kubenswrapper[4725]: I1014 14:19:23.954854 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e61091e2-13f1-418b-b9ea-0900f8cd786b/cinder-scheduler/0.log" Oct 14 14:19:24 crc kubenswrapper[4725]: I1014 14:19:24.049811 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e61091e2-13f1-418b-b9ea-0900f8cd786b/probe/0.log" Oct 14 14:19:24 crc kubenswrapper[4725]: I1014 14:19:24.147699 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-rmqbp_0292d7e3-6419-47e4-aa3f-acdfe45f9f01/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 14:19:24 crc kubenswrapper[4725]: I1014 14:19:24.249193 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-mnw6x_d261a991-7df5-4a6b-981d-b191a3d4702b/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 14:19:24 crc kubenswrapper[4725]: I1014 14:19:24.457979 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-x8xm9_b09bcb80-b0c5-4c20-a5a7-0fd0bbf73005/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 14:19:24 crc kubenswrapper[4725]: I1014 14:19:24.587420 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-vngt8_8b1be84e-cdc5-433a-a684-b9938901a03a/init/0.log" Oct 14 14:19:24 crc kubenswrapper[4725]: I1014 14:19:24.755707 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-vngt8_8b1be84e-cdc5-433a-a684-b9938901a03a/init/0.log" Oct 14 14:19:24 crc kubenswrapper[4725]: I1014 14:19:24.839220 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-vngt8_8b1be84e-cdc5-433a-a684-b9938901a03a/dnsmasq-dns/0.log" Oct 14 14:19:24 crc kubenswrapper[4725]: I1014 14:19:24.970010 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-frd7h_9ba9f871-37fb-47be-b960-dc85368cff29/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 14:19:25 crc kubenswrapper[4725]: I1014 14:19:25.063049 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_8afd9cea-f51f-4174-b856-90b357779daf/glance-httpd/0.log" Oct 14 14:19:25 crc kubenswrapper[4725]: I1014 14:19:25.159594 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_8afd9cea-f51f-4174-b856-90b357779daf/glance-log/0.log" Oct 14 14:19:25 crc kubenswrapper[4725]: I1014 14:19:25.305098 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_6c7c1e20-3ab6-42f7-81f6-aa6444a258ec/glance-httpd/0.log" Oct 14 14:19:25 crc kubenswrapper[4725]: I1014 14:19:25.358987 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_6c7c1e20-3ab6-42f7-81f6-aa6444a258ec/glance-log/0.log" Oct 14 14:19:25 crc kubenswrapper[4725]: 
I1014 14:19:25.576018 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7cdf854644-xbv6p_0f50192b-c5ae-418d-9d3a-a670d49f8ded/horizon/0.log" Oct 14 14:19:25 crc kubenswrapper[4725]: I1014 14:19:25.780483 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-qfq4m_f9f32163-993e-4610-8c70-aaae1d52dc40/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 14:19:25 crc kubenswrapper[4725]: I1014 14:19:25.914242 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7cdf854644-xbv6p_0f50192b-c5ae-418d-9d3a-a670d49f8ded/horizon-log/0.log" Oct 14 14:19:25 crc kubenswrapper[4725]: I1014 14:19:25.915293 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-h7bvm_8b1732ba-bcae-4c5f-96d2-230b650b8552/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 14:19:26 crc kubenswrapper[4725]: I1014 14:19:26.124290 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29340841-nk2gk_db5e329b-6d98-4421-a995-f34e57421846/keystone-cron/0.log" Oct 14 14:19:26 crc kubenswrapper[4725]: I1014 14:19:26.206102 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-974b44687-rnvdw_425d7189-b6f4-4bdf-8e0c-7eed10df706d/keystone-api/0.log" Oct 14 14:19:26 crc kubenswrapper[4725]: I1014 14:19:26.463523 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_545ffde3-abf2-443a-853d-c4d2a35a7e56/kube-state-metrics/0.log" Oct 14 14:19:26 crc kubenswrapper[4725]: I1014 14:19:26.546331 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-tn5zz_6846c1d5-d8cd-40d7-a7b9-080f8f722248/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 14:19:26 crc kubenswrapper[4725]: I1014 14:19:26.850369 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-846bc7f557-srw79_8a4fb75f-e202-4df3-bb0c-bc889dd701d7/neutron-api/0.log" Oct 14 14:19:26 crc kubenswrapper[4725]: I1014 14:19:26.860823 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-846bc7f557-srw79_8a4fb75f-e202-4df3-bb0c-bc889dd701d7/neutron-httpd/0.log" Oct 14 14:19:27 crc kubenswrapper[4725]: I1014 14:19:27.074111 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-r4q74_017a8a26-0804-4e78-972b-e74224e16f72/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 14:19:27 crc kubenswrapper[4725]: I1014 14:19:27.639422 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_451fbb86-072b-41e1-8c6a-2433844f2e6e/nova-api-log/0.log" Oct 14 14:19:27 crc kubenswrapper[4725]: I1014 14:19:27.871840 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_4ecc0eca-3a69-4d6c-be9c-a6af5c0fceff/nova-cell0-conductor-conductor/0.log" Oct 14 14:19:27 crc kubenswrapper[4725]: I1014 14:19:27.921050 4725 scope.go:117] "RemoveContainer" containerID="1e2cf79492a165ced9cc9a27ff00cb3288ac5a0abfa5f696ffa2da70ed76d129" Oct 14 14:19:27 crc kubenswrapper[4725]: E1014 14:19:27.921383 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 14:19:28 crc kubenswrapper[4725]: I1014 14:19:28.085791 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_451fbb86-072b-41e1-8c6a-2433844f2e6e/nova-api-api/0.log" Oct 14 14:19:28 crc kubenswrapper[4725]: I1014 14:19:28.247198 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_307135f2-8895-4193-aab6-077f7bd59bec/nova-cell1-conductor-conductor/0.log" Oct 14 14:19:28 crc kubenswrapper[4725]: I1014 14:19:28.434397 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_bb3ef2cb-5705-496c-9419-7609209d830d/nova-cell1-novncproxy-novncproxy/0.log" Oct 14 14:19:28 crc kubenswrapper[4725]: I1014 14:19:28.495299 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-6dd7s_23eebac6-ae71-4a11-bad6-e58bdcb8d716/nova-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 14:19:28 crc kubenswrapper[4725]: I1014 14:19:28.722870 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_42ae164c-62cd-48cd-a5cf-17ce40c2cc61/nova-metadata-log/0.log" Oct 14 14:19:29 crc kubenswrapper[4725]: I1014 14:19:29.167519 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_720e68c1-d52e-4606-a3ac-a331c8039890/nova-scheduler-scheduler/0.log" Oct 14 14:19:29 crc kubenswrapper[4725]: I1014 14:19:29.328570 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_697b603d-cd65-466b-930a-86c43c8483ba/mysql-bootstrap/0.log" Oct 14 14:19:29 crc kubenswrapper[4725]: I1014 14:19:29.501863 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_697b603d-cd65-466b-930a-86c43c8483ba/mysql-bootstrap/0.log" Oct 14 14:19:29 crc kubenswrapper[4725]: I1014 14:19:29.525610 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_697b603d-cd65-466b-930a-86c43c8483ba/galera/0.log" Oct 14 14:19:29 crc kubenswrapper[4725]: I1014 14:19:29.548976 4725 generic.go:334] "Generic (PLEG): container finished" podID="c79d3f8c-2b71-49ba-a5bd-0b68c5cb7319" containerID="2d350d9bcf4f6b6e51d3f2de86c0ff30955b2271f1bb50014dc348417b0c62ae" exitCode=0 Oct 14 14:19:29 crc kubenswrapper[4725]: I1014 14:19:29.549026 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5kqrn/crc-debug-h2cl8" event={"ID":"c79d3f8c-2b71-49ba-a5bd-0b68c5cb7319","Type":"ContainerDied","Data":"2d350d9bcf4f6b6e51d3f2de86c0ff30955b2271f1bb50014dc348417b0c62ae"} Oct 14 14:19:29 crc kubenswrapper[4725]: I1014 14:19:29.744348 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_b72e8db7-f91f-41b1-95bd-366cd156f5ed/mysql-bootstrap/0.log" Oct 14 14:19:29 crc kubenswrapper[4725]: I1014 14:19:29.900009 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_b72e8db7-f91f-41b1-95bd-366cd156f5ed/mysql-bootstrap/0.log" Oct 14 14:19:29 crc kubenswrapper[4725]: I1014 14:19:29.956332 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_b72e8db7-f91f-41b1-95bd-366cd156f5ed/galera/0.log" Oct 14 14:19:30 crc kubenswrapper[4725]: I1014 14:19:30.067731 4725 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_42ae164c-62cd-48cd-a5cf-17ce40c2cc61/nova-metadata-metadata/0.log" Oct 14 14:19:30 crc kubenswrapper[4725]: I1014 14:19:30.152898 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_3e432c2c-86bc-4b07-81dd-a98be5ad1ca9/openstackclient/0.log" Oct 14 14:19:30 crc kubenswrapper[4725]: I1014 14:19:30.322072 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-lx5gg_d31243a3-876e-4912-a468-195483df0425/openstack-network-exporter/0.log" Oct 14 14:19:30 crc kubenswrapper[4725]: I1014 14:19:30.506549 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-mbwcn_02b8ba43-5172-4bac-ac99-104ddf5aea0f/ovsdb-server-init/0.log" Oct 14 14:19:30 crc kubenswrapper[4725]: I1014 14:19:30.663420 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5kqrn/crc-debug-h2cl8" Oct 14 14:19:30 crc kubenswrapper[4725]: I1014 14:19:30.691241 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-mbwcn_02b8ba43-5172-4bac-ac99-104ddf5aea0f/ovs-vswitchd/0.log" Oct 14 14:19:30 crc kubenswrapper[4725]: I1014 14:19:30.708447 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5kqrn/crc-debug-h2cl8"] Oct 14 14:19:30 crc kubenswrapper[4725]: I1014 14:19:30.713187 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c79d3f8c-2b71-49ba-a5bd-0b68c5cb7319-host\") pod \"c79d3f8c-2b71-49ba-a5bd-0b68c5cb7319\" (UID: \"c79d3f8c-2b71-49ba-a5bd-0b68c5cb7319\") " Oct 14 14:19:30 crc kubenswrapper[4725]: I1014 14:19:30.713350 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s46w9\" (UniqueName: \"kubernetes.io/projected/c79d3f8c-2b71-49ba-a5bd-0b68c5cb7319-kube-api-access-s46w9\") pod \"c79d3f8c-2b71-49ba-a5bd-0b68c5cb7319\" (UID: \"c79d3f8c-2b71-49ba-a5bd-0b68c5cb7319\") " Oct 14 14:19:30 crc kubenswrapper[4725]: I1014 14:19:30.713941 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c79d3f8c-2b71-49ba-a5bd-0b68c5cb7319-host" (OuterVolumeSpecName: "host") pod "c79d3f8c-2b71-49ba-a5bd-0b68c5cb7319" (UID: "c79d3f8c-2b71-49ba-a5bd-0b68c5cb7319"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 14:19:30 crc kubenswrapper[4725]: I1014 14:19:30.714534 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-mbwcn_02b8ba43-5172-4bac-ac99-104ddf5aea0f/ovsdb-server-init/0.log" Oct 14 14:19:30 crc kubenswrapper[4725]: I1014 14:19:30.715838 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5kqrn/crc-debug-h2cl8"] Oct 14 14:19:30 crc kubenswrapper[4725]: I1014 14:19:30.719052 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-mbwcn_02b8ba43-5172-4bac-ac99-104ddf5aea0f/ovsdb-server/0.log" Oct 14 14:19:30 crc kubenswrapper[4725]: I1014 14:19:30.721593 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c79d3f8c-2b71-49ba-a5bd-0b68c5cb7319-kube-api-access-s46w9" (OuterVolumeSpecName: "kube-api-access-s46w9") pod "c79d3f8c-2b71-49ba-a5bd-0b68c5cb7319" (UID: "c79d3f8c-2b71-49ba-a5bd-0b68c5cb7319"). InnerVolumeSpecName "kube-api-access-s46w9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:19:30 crc kubenswrapper[4725]: I1014 14:19:30.814926 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s46w9\" (UniqueName: \"kubernetes.io/projected/c79d3f8c-2b71-49ba-a5bd-0b68c5cb7319-kube-api-access-s46w9\") on node \"crc\" DevicePath \"\"" Oct 14 14:19:30 crc kubenswrapper[4725]: I1014 14:19:30.814955 4725 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c79d3f8c-2b71-49ba-a5bd-0b68c5cb7319-host\") on node \"crc\" DevicePath \"\"" Oct 14 14:19:30 crc kubenswrapper[4725]: I1014 14:19:30.897261 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-x2x2g_19c7a5f5-d3aa-4f3d-bb7b-2c7bb6420e1c/ovn-controller/0.log" Oct 14 14:19:31 crc kubenswrapper[4725]: I1014 14:19:31.106202 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-th8rg_e7889f82-d1de-4040-a197-5444bf9951c6/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 14:19:31 crc kubenswrapper[4725]: I1014 14:19:31.193703 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_0431437b-b27f-4e47-8a60-6aecd1148270/openstack-network-exporter/0.log" Oct 14 14:19:31 crc kubenswrapper[4725]: I1014 14:19:31.317750 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_0431437b-b27f-4e47-8a60-6aecd1148270/ovn-northd/0.log" Oct 14 14:19:31 crc kubenswrapper[4725]: I1014 14:19:31.401432 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_cb174fae-72d3-45b5-b008-80b4fa482f1e/openstack-network-exporter/0.log" Oct 14 14:19:31 crc kubenswrapper[4725]: I1014 14:19:31.496138 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_cb174fae-72d3-45b5-b008-80b4fa482f1e/ovsdbserver-nb/0.log" Oct 14 14:19:31 crc kubenswrapper[4725]: I1014 14:19:31.564269 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="807b27c1a52fb9374691145dc46cfcf748885702537dc8188f38d107b3847766" Oct 14 14:19:31 crc kubenswrapper[4725]: I1014 14:19:31.564322 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5kqrn/crc-debug-h2cl8" Oct 14 14:19:31 crc kubenswrapper[4725]: I1014 14:19:31.621522 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_45a32640-96e1-4f6d-9ace-039ad444fe9e/openstack-network-exporter/0.log" Oct 14 14:19:31 crc kubenswrapper[4725]: I1014 14:19:31.696950 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_45a32640-96e1-4f6d-9ace-039ad444fe9e/ovsdbserver-sb/0.log" Oct 14 14:19:31 crc kubenswrapper[4725]: I1014 14:19:31.883441 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-85547dff5b-c66wz_0163b2fd-7423-4a50-90a8-e312d0b4db22/placement-api/0.log" Oct 14 14:19:31 crc kubenswrapper[4725]: I1014 14:19:31.930651 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c79d3f8c-2b71-49ba-a5bd-0b68c5cb7319" path="/var/lib/kubelet/pods/c79d3f8c-2b71-49ba-a5bd-0b68c5cb7319/volumes" Oct 14 14:19:31 crc kubenswrapper[4725]: I1014 14:19:31.968504 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5kqrn/crc-debug-rbmkw"] Oct 14 14:19:31 crc kubenswrapper[4725]: E1014 14:19:31.969367 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c79d3f8c-2b71-49ba-a5bd-0b68c5cb7319" containerName="container-00" Oct 14 14:19:31 crc kubenswrapper[4725]: I1014 14:19:31.969387 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="c79d3f8c-2b71-49ba-a5bd-0b68c5cb7319" containerName="container-00" Oct 14 14:19:31 crc kubenswrapper[4725]: I1014 14:19:31.969675 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="c79d3f8c-2b71-49ba-a5bd-0b68c5cb7319" containerName="container-00" Oct 14 14:19:31 crc kubenswrapper[4725]: I1014 14:19:31.971303 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5kqrn/crc-debug-rbmkw" Oct 14 14:19:32 crc kubenswrapper[4725]: I1014 14:19:32.034313 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/01244874-e49e-4148-88e2-5383c1dfccae-host\") pod \"crc-debug-rbmkw\" (UID: \"01244874-e49e-4148-88e2-5383c1dfccae\") " pod="openshift-must-gather-5kqrn/crc-debug-rbmkw" Oct 14 14:19:32 crc kubenswrapper[4725]: I1014 14:19:32.034482 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvrbr\" (UniqueName: \"kubernetes.io/projected/01244874-e49e-4148-88e2-5383c1dfccae-kube-api-access-mvrbr\") pod \"crc-debug-rbmkw\" (UID: \"01244874-e49e-4148-88e2-5383c1dfccae\") " pod="openshift-must-gather-5kqrn/crc-debug-rbmkw" Oct 14 14:19:32 crc kubenswrapper[4725]: I1014 14:19:32.055693 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-85547dff5b-c66wz_0163b2fd-7423-4a50-90a8-e312d0b4db22/placement-log/0.log" Oct 14 14:19:32 crc kubenswrapper[4725]: I1014 14:19:32.106622 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e/setup-container/0.log" Oct 14 14:19:32 crc kubenswrapper[4725]: I1014 14:19:32.136312 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/01244874-e49e-4148-88e2-5383c1dfccae-host\") pod \"crc-debug-rbmkw\" (UID: \"01244874-e49e-4148-88e2-5383c1dfccae\") " pod="openshift-must-gather-5kqrn/crc-debug-rbmkw" Oct 14 14:19:32 crc kubenswrapper[4725]: I1014 14:19:32.136417 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/01244874-e49e-4148-88e2-5383c1dfccae-host\") pod \"crc-debug-rbmkw\" (UID: \"01244874-e49e-4148-88e2-5383c1dfccae\") " pod="openshift-must-gather-5kqrn/crc-debug-rbmkw" Oct 14 14:19:32 crc kubenswrapper[4725]: I1014 14:19:32.136480 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvrbr\" (UniqueName: \"kubernetes.io/projected/01244874-e49e-4148-88e2-5383c1dfccae-kube-api-access-mvrbr\") pod \"crc-debug-rbmkw\" (UID: \"01244874-e49e-4148-88e2-5383c1dfccae\") " pod="openshift-must-gather-5kqrn/crc-debug-rbmkw" Oct 14 14:19:32 crc kubenswrapper[4725]: I1014 14:19:32.163394 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvrbr\" (UniqueName: \"kubernetes.io/projected/01244874-e49e-4148-88e2-5383c1dfccae-kube-api-access-mvrbr\") pod \"crc-debug-rbmkw\" (UID: \"01244874-e49e-4148-88e2-5383c1dfccae\") " pod="openshift-must-gather-5kqrn/crc-debug-rbmkw" Oct 14 14:19:32 crc kubenswrapper[4725]: I1014 14:19:32.288053 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5kqrn/crc-debug-rbmkw" Oct 14 14:19:32 crc kubenswrapper[4725]: I1014 14:19:32.309566 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e/rabbitmq/0.log" Oct 14 14:19:32 crc kubenswrapper[4725]: I1014 14:19:32.376565 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0fa2f8ba-c687-4c04-a9aa-b24fe6579d5e/setup-container/0.log" Oct 14 14:19:32 crc kubenswrapper[4725]: I1014 14:19:32.514799 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_cbf19e88-e140-4357-8255-fdc507d7db52/setup-container/0.log" Oct 14 14:19:32 crc kubenswrapper[4725]: I1014 14:19:32.574150 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5kqrn/crc-debug-rbmkw" event={"ID":"01244874-e49e-4148-88e2-5383c1dfccae","Type":"ContainerStarted","Data":"ec1acd86b8b8a6ceac891515b3ab931dcd985fdc08765b400bc627f2fb536ea7"} Oct 14 14:19:32 crc kubenswrapper[4725]: I1014 14:19:32.574353 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5kqrn/crc-debug-rbmkw" event={"ID":"01244874-e49e-4148-88e2-5383c1dfccae","Type":"ContainerStarted","Data":"01011ba38ee57c61c6a91b00119afdef67e746fd50a2509eed056a07c48e2f58"} Oct 14 14:19:32 crc kubenswrapper[4725]: I1014 14:19:32.593134 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-5kqrn/crc-debug-rbmkw" podStartSLOduration=1.593116824 podStartE2EDuration="1.593116824s" podCreationTimestamp="2025-10-14 14:19:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-14 14:19:32.585538429 +0000 UTC m=+3889.433973238" watchObservedRunningTime="2025-10-14 14:19:32.593116824 +0000 UTC m=+3889.441551633" Oct 14 14:19:32 crc kubenswrapper[4725]: I1014 14:19:32.734235 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_cbf19e88-e140-4357-8255-fdc507d7db52/setup-container/0.log" Oct 14 14:19:32 crc kubenswrapper[4725]: I1014 14:19:32.745311 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_cbf19e88-e140-4357-8255-fdc507d7db52/rabbitmq/0.log" Oct 14 14:19:32 crc kubenswrapper[4725]: I1014 14:19:32.983297 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-bhl58_dc2f4f76-dc9b-433c-81f6-dabc27cf63c9/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 14:19:33 crc kubenswrapper[4725]: I1014 14:19:33.026911 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-z9g9d_0e7dc888-e069-4f97-84b9-02e9f37aec6c/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 14:19:33 crc kubenswrapper[4725]: I1014 14:19:33.183127 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-pb4gn_16f2164a-fbb3-4515-8732-723ea2301364/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 14:19:33 crc kubenswrapper[4725]: I1014 14:19:33.378524 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-594bc_9693426d-0bd5-4a57-84ca-6491f9fdc1a0/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 14:19:33 crc kubenswrapper[4725]: I1014 14:19:33.414098 4725 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-qblng_f9161175-d899-4a2e-89cc-f49f51470e2f/ssh-known-hosts-edpm-deployment/0.log" Oct 14 14:19:33 crc kubenswrapper[4725]: I1014 14:19:33.584659 4725 generic.go:334] "Generic (PLEG): container finished" podID="01244874-e49e-4148-88e2-5383c1dfccae" containerID="ec1acd86b8b8a6ceac891515b3ab931dcd985fdc08765b400bc627f2fb536ea7" exitCode=0 Oct 14 14:19:33 crc kubenswrapper[4725]: I1014 14:19:33.584714 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5kqrn/crc-debug-rbmkw" event={"ID":"01244874-e49e-4148-88e2-5383c1dfccae","Type":"ContainerDied","Data":"ec1acd86b8b8a6ceac891515b3ab931dcd985fdc08765b400bc627f2fb536ea7"} Oct 14 14:19:33 crc kubenswrapper[4725]: I1014 14:19:33.704203 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-749ff78757-kdtzj_77d143f2-1e54-4c47-a06b-90136098179d/proxy-server/0.log" Oct 14 14:19:33 crc kubenswrapper[4725]: I1014 14:19:33.779191 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-749ff78757-kdtzj_77d143f2-1e54-4c47-a06b-90136098179d/proxy-httpd/0.log" Oct 14 14:19:33 crc kubenswrapper[4725]: I1014 14:19:33.859727 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-2h97x_02dca10f-0051-499a-b8e5-636a18d74f83/swift-ring-rebalance/0.log" Oct 14 14:19:34 crc kubenswrapper[4725]: I1014 14:19:34.062969 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b115803-e57f-4651-8a38-9b1aece05cdf/account-reaper/0.log" Oct 14 14:19:34 crc kubenswrapper[4725]: I1014 14:19:34.158414 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b115803-e57f-4651-8a38-9b1aece05cdf/account-auditor/0.log" Oct 14 14:19:34 crc kubenswrapper[4725]: I1014 14:19:34.251510 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b115803-e57f-4651-8a38-9b1aece05cdf/account-replicator/0.log" Oct 14 14:19:34 crc kubenswrapper[4725]: I1014 14:19:34.290283 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b115803-e57f-4651-8a38-9b1aece05cdf/account-server/0.log" Oct 14 14:19:34 crc kubenswrapper[4725]: I1014 14:19:34.312343 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nm285"] Oct 14 14:19:34 crc kubenswrapper[4725]: I1014 14:19:34.314211 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nm285" Oct 14 14:19:34 crc kubenswrapper[4725]: I1014 14:19:34.348435 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nm285"] Oct 14 14:19:34 crc kubenswrapper[4725]: I1014 14:19:34.373375 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b115803-e57f-4651-8a38-9b1aece05cdf/container-auditor/0.log" Oct 14 14:19:34 crc kubenswrapper[4725]: I1014 14:19:34.375248 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/340c0b5f-4be9-4cc5-a618-62ce89447281-utilities\") pod \"redhat-marketplace-nm285\" (UID: \"340c0b5f-4be9-4cc5-a618-62ce89447281\") " pod="openshift-marketplace/redhat-marketplace-nm285" Oct 14 14:19:34 crc kubenswrapper[4725]: I1014 14:19:34.375481 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/340c0b5f-4be9-4cc5-a618-62ce89447281-catalog-content\") pod \"redhat-marketplace-nm285\" (UID: \"340c0b5f-4be9-4cc5-a618-62ce89447281\") " pod="openshift-marketplace/redhat-marketplace-nm285" Oct 14 14:19:34 crc kubenswrapper[4725]: I1014 14:19:34.375738 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvtpr\" (UniqueName: \"kubernetes.io/projected/340c0b5f-4be9-4cc5-a618-62ce89447281-kube-api-access-fvtpr\") pod \"redhat-marketplace-nm285\" (UID: \"340c0b5f-4be9-4cc5-a618-62ce89447281\") " pod="openshift-marketplace/redhat-marketplace-nm285" Oct 14 14:19:34 crc kubenswrapper[4725]: I1014 14:19:34.477514 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvtpr\" (UniqueName: \"kubernetes.io/projected/340c0b5f-4be9-4cc5-a618-62ce89447281-kube-api-access-fvtpr\") pod \"redhat-marketplace-nm285\" (UID: \"340c0b5f-4be9-4cc5-a618-62ce89447281\") " pod="openshift-marketplace/redhat-marketplace-nm285" Oct 14 14:19:34 crc kubenswrapper[4725]: I1014 14:19:34.477634 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/340c0b5f-4be9-4cc5-a618-62ce89447281-utilities\") pod \"redhat-marketplace-nm285\" (UID: \"340c0b5f-4be9-4cc5-a618-62ce89447281\") " pod="openshift-marketplace/redhat-marketplace-nm285" Oct 14 14:19:34 crc kubenswrapper[4725]: I1014 14:19:34.477704 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/340c0b5f-4be9-4cc5-a618-62ce89447281-catalog-content\") pod \"redhat-marketplace-nm285\" (UID: \"340c0b5f-4be9-4cc5-a618-62ce89447281\") " pod="openshift-marketplace/redhat-marketplace-nm285" Oct 14 14:19:34 crc kubenswrapper[4725]: I1014 14:19:34.478293 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/340c0b5f-4be9-4cc5-a618-62ce89447281-catalog-content\") pod \"redhat-marketplace-nm285\" (UID: \"340c0b5f-4be9-4cc5-a618-62ce89447281\") " pod="openshift-marketplace/redhat-marketplace-nm285" Oct 14 14:19:34 crc kubenswrapper[4725]: I1014 14:19:34.478645 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/340c0b5f-4be9-4cc5-a618-62ce89447281-utilities\") pod \"redhat-marketplace-nm285\" 
(UID: \"340c0b5f-4be9-4cc5-a618-62ce89447281\") " pod="openshift-marketplace/redhat-marketplace-nm285" Oct 14 14:19:34 crc kubenswrapper[4725]: I1014 14:19:34.525141 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvtpr\" (UniqueName: \"kubernetes.io/projected/340c0b5f-4be9-4cc5-a618-62ce89447281-kube-api-access-fvtpr\") pod \"redhat-marketplace-nm285\" (UID: \"340c0b5f-4be9-4cc5-a618-62ce89447281\") " pod="openshift-marketplace/redhat-marketplace-nm285" Oct 14 14:19:34 crc kubenswrapper[4725]: I1014 14:19:34.588826 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b115803-e57f-4651-8a38-9b1aece05cdf/container-updater/0.log" Oct 14 14:19:34 crc kubenswrapper[4725]: I1014 14:19:34.613513 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b115803-e57f-4651-8a38-9b1aece05cdf/container-replicator/0.log" Oct 14 14:19:34 crc kubenswrapper[4725]: I1014 14:19:34.634645 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nm285" Oct 14 14:19:34 crc kubenswrapper[4725]: I1014 14:19:34.638414 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b115803-e57f-4651-8a38-9b1aece05cdf/container-server/0.log" Oct 14 14:19:34 crc kubenswrapper[4725]: I1014 14:19:34.746557 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5kqrn/crc-debug-rbmkw" Oct 14 14:19:34 crc kubenswrapper[4725]: I1014 14:19:34.807803 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5kqrn/crc-debug-rbmkw"] Oct 14 14:19:34 crc kubenswrapper[4725]: I1014 14:19:34.818250 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b115803-e57f-4651-8a38-9b1aece05cdf/object-auditor/0.log" Oct 14 14:19:34 crc kubenswrapper[4725]: I1014 14:19:34.834688 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5kqrn/crc-debug-rbmkw"] Oct 14 14:19:34 crc kubenswrapper[4725]: I1014 14:19:34.886171 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/01244874-e49e-4148-88e2-5383c1dfccae-host\") pod \"01244874-e49e-4148-88e2-5383c1dfccae\" (UID: \"01244874-e49e-4148-88e2-5383c1dfccae\") " Oct 14 14:19:34 crc kubenswrapper[4725]: I1014 14:19:34.886278 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvrbr\" (UniqueName: \"kubernetes.io/projected/01244874-e49e-4148-88e2-5383c1dfccae-kube-api-access-mvrbr\") pod \"01244874-e49e-4148-88e2-5383c1dfccae\" (UID: \"01244874-e49e-4148-88e2-5383c1dfccae\") " Oct 14 14:19:34 crc kubenswrapper[4725]: I1014 14:19:34.887216 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01244874-e49e-4148-88e2-5383c1dfccae-host" (OuterVolumeSpecName: "host") pod "01244874-e49e-4148-88e2-5383c1dfccae" (UID: "01244874-e49e-4148-88e2-5383c1dfccae"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 14:19:34 crc kubenswrapper[4725]: I1014 14:19:34.888048 4725 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/01244874-e49e-4148-88e2-5383c1dfccae-host\") on node \"crc\" DevicePath \"\"" Oct 14 14:19:34 crc kubenswrapper[4725]: I1014 14:19:34.904617 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01244874-e49e-4148-88e2-5383c1dfccae-kube-api-access-mvrbr" (OuterVolumeSpecName: "kube-api-access-mvrbr") pod "01244874-e49e-4148-88e2-5383c1dfccae" (UID: "01244874-e49e-4148-88e2-5383c1dfccae"). InnerVolumeSpecName "kube-api-access-mvrbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:19:34 crc kubenswrapper[4725]: I1014 14:19:34.929586 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b115803-e57f-4651-8a38-9b1aece05cdf/object-expirer/0.log" Oct 14 14:19:34 crc kubenswrapper[4725]: I1014 14:19:34.955494 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b115803-e57f-4651-8a38-9b1aece05cdf/object-replicator/0.log" Oct 14 14:19:34 crc kubenswrapper[4725]: I1014 14:19:34.990228 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvrbr\" (UniqueName: \"kubernetes.io/projected/01244874-e49e-4148-88e2-5383c1dfccae-kube-api-access-mvrbr\") on node \"crc\" DevicePath \"\"" Oct 14 14:19:35 crc kubenswrapper[4725]: I1014 14:19:35.080703 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b115803-e57f-4651-8a38-9b1aece05cdf/object-server/0.log" Oct 14 14:19:35 crc kubenswrapper[4725]: I1014 14:19:35.130572 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nm285"] Oct 14 14:19:35 crc kubenswrapper[4725]: I1014 14:19:35.137417 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b115803-e57f-4651-8a38-9b1aece05cdf/object-updater/0.log" Oct 14 14:19:35 crc kubenswrapper[4725]: I1014 14:19:35.216082 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b115803-e57f-4651-8a38-9b1aece05cdf/rsync/0.log" Oct 14 14:19:35 crc kubenswrapper[4725]: I1014 14:19:35.295431 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8b115803-e57f-4651-8a38-9b1aece05cdf/swift-recon-cron/0.log" Oct 14 14:19:35 crc kubenswrapper[4725]: I1014 14:19:35.505048 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-2zd2k_fd0a86b0-e908-472b-93d0-5eede843a424/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 14:19:35 crc kubenswrapper[4725]: I1014 14:19:35.602551 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01011ba38ee57c61c6a91b00119afdef67e746fd50a2509eed056a07c48e2f58" Oct 14 14:19:35 crc kubenswrapper[4725]: I1014 14:19:35.602587 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5kqrn/crc-debug-rbmkw" Oct 14 14:19:35 crc kubenswrapper[4725]: I1014 14:19:35.605471 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nm285" event={"ID":"340c0b5f-4be9-4cc5-a618-62ce89447281","Type":"ContainerDied","Data":"531424d61ac91291c3f744c19cf4802abafb5f8af63e7d1ebd1e0c5788d2a341"} Oct 14 14:19:35 crc kubenswrapper[4725]: I1014 14:19:35.605399 4725 generic.go:334] "Generic (PLEG): container finished" podID="340c0b5f-4be9-4cc5-a618-62ce89447281" containerID="531424d61ac91291c3f744c19cf4802abafb5f8af63e7d1ebd1e0c5788d2a341" exitCode=0 Oct 14 14:19:35 crc kubenswrapper[4725]: I1014 14:19:35.605675 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nm285" event={"ID":"340c0b5f-4be9-4cc5-a618-62ce89447281","Type":"ContainerStarted","Data":"d0d3ba8975ae760013ce5c17160b9103adc07ec81a31a2cdc3dc3830ffd39099"} Oct 14 14:19:35 crc kubenswrapper[4725]: I1014 14:19:35.621953 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_148578f5-c02c-4ef4-a214-87532b2d29e2/tempest-tests-tempest-tests-runner/0.log" Oct 14 14:19:35 crc kubenswrapper[4725]: I1014 14:19:35.748050 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_a47eaf15-990a-4f7e-aaec-7fc90ee8fdfd/test-operator-logs-container/0.log" Oct 14 14:19:35 crc kubenswrapper[4725]: I1014 14:19:35.933075 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01244874-e49e-4148-88e2-5383c1dfccae" path="/var/lib/kubelet/pods/01244874-e49e-4148-88e2-5383c1dfccae/volumes" Oct 14 14:19:35 crc kubenswrapper[4725]: I1014 14:19:35.962108 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-mlbgm_08321cb0-edbc-4ee5-8f5e-38084f62802a/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Oct 14 14:19:35 crc kubenswrapper[4725]: I1014 14:19:35.985044 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5kqrn/crc-debug-v2zx8"] Oct 14 14:19:35 crc kubenswrapper[4725]: E1014 14:19:35.985444 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01244874-e49e-4148-88e2-5383c1dfccae" containerName="container-00" Oct 14 14:19:35 crc kubenswrapper[4725]: I1014 14:19:35.985489 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="01244874-e49e-4148-88e2-5383c1dfccae" containerName="container-00" Oct 14 14:19:35 crc kubenswrapper[4725]: I1014 14:19:35.985703 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="01244874-e49e-4148-88e2-5383c1dfccae" containerName="container-00" Oct 14 14:19:35 crc kubenswrapper[4725]: I1014 14:19:35.986388 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5kqrn/crc-debug-v2zx8" Oct 14 14:19:36 crc kubenswrapper[4725]: I1014 14:19:36.108561 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9a27c316-357f-4be8-908c-613b061856c1-host\") pod \"crc-debug-v2zx8\" (UID: \"9a27c316-357f-4be8-908c-613b061856c1\") " pod="openshift-must-gather-5kqrn/crc-debug-v2zx8" Oct 14 14:19:36 crc kubenswrapper[4725]: I1014 14:19:36.108712 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr9wz\" (UniqueName: \"kubernetes.io/projected/9a27c316-357f-4be8-908c-613b061856c1-kube-api-access-hr9wz\") pod \"crc-debug-v2zx8\" (UID: \"9a27c316-357f-4be8-908c-613b061856c1\") " pod="openshift-must-gather-5kqrn/crc-debug-v2zx8" Oct 14 14:19:36 crc kubenswrapper[4725]: I1014 14:19:36.210368 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9a27c316-357f-4be8-908c-613b061856c1-host\") pod \"crc-debug-v2zx8\" (UID: \"9a27c316-357f-4be8-908c-613b061856c1\") " pod="openshift-must-gather-5kqrn/crc-debug-v2zx8" Oct 14 14:19:36 crc kubenswrapper[4725]: I1014 14:19:36.210446 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr9wz\" (UniqueName: \"kubernetes.io/projected/9a27c316-357f-4be8-908c-613b061856c1-kube-api-access-hr9wz\") pod \"crc-debug-v2zx8\" (UID: \"9a27c316-357f-4be8-908c-613b061856c1\") " pod="openshift-must-gather-5kqrn/crc-debug-v2zx8" Oct 14 14:19:36 crc kubenswrapper[4725]: I1014 14:19:36.210535 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9a27c316-357f-4be8-908c-613b061856c1-host\") pod \"crc-debug-v2zx8\" (UID: \"9a27c316-357f-4be8-908c-613b061856c1\") " pod="openshift-must-gather-5kqrn/crc-debug-v2zx8" Oct 14 14:19:36 crc kubenswrapper[4725]: I1014 14:19:36.231222 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr9wz\" (UniqueName: \"kubernetes.io/projected/9a27c316-357f-4be8-908c-613b061856c1-kube-api-access-hr9wz\") pod \"crc-debug-v2zx8\" (UID: \"9a27c316-357f-4be8-908c-613b061856c1\") " pod="openshift-must-gather-5kqrn/crc-debug-v2zx8" Oct 14 14:19:36 crc kubenswrapper[4725]: I1014 14:19:36.302415 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5kqrn/crc-debug-v2zx8" Oct 14 14:19:36 crc kubenswrapper[4725]: I1014 14:19:36.614970 4725 generic.go:334] "Generic (PLEG): container finished" podID="9a27c316-357f-4be8-908c-613b061856c1" containerID="35169051a4ef81385824d62ef6709b42a20cd4ee84665164ca2535438e5a04d6" exitCode=0 Oct 14 14:19:36 crc kubenswrapper[4725]: I1014 14:19:36.615054 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5kqrn/crc-debug-v2zx8" event={"ID":"9a27c316-357f-4be8-908c-613b061856c1","Type":"ContainerDied","Data":"35169051a4ef81385824d62ef6709b42a20cd4ee84665164ca2535438e5a04d6"} Oct 14 14:19:36 crc kubenswrapper[4725]: I1014 14:19:36.617554 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5kqrn/crc-debug-v2zx8" event={"ID":"9a27c316-357f-4be8-908c-613b061856c1","Type":"ContainerStarted","Data":"a283088a6210b6a1e1fe6227cfefc161ad5da073611b917d0abe1988d476498a"} Oct 14 14:19:36 crc kubenswrapper[4725]: I1014 14:19:36.696512 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5kqrn/crc-debug-v2zx8"] Oct 14 14:19:36 crc kubenswrapper[4725]: I1014 14:19:36.708638 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5kqrn/crc-debug-v2zx8"] Oct 14 14:19:37 crc kubenswrapper[4725]: I1014 14:19:37.632027 4725 generic.go:334] "Generic (PLEG): container finished" podID="340c0b5f-4be9-4cc5-a618-62ce89447281" containerID="c56d0d4ba2c67dc0177b6ca8ef3a1daeeafdd5827fc1eb88931ab9020a5b0e0a" exitCode=0 Oct 14 14:19:37 crc kubenswrapper[4725]: I1014 14:19:37.632306 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nm285" event={"ID":"340c0b5f-4be9-4cc5-a618-62ce89447281","Type":"ContainerDied","Data":"c56d0d4ba2c67dc0177b6ca8ef3a1daeeafdd5827fc1eb88931ab9020a5b0e0a"} Oct 14 14:19:37 crc kubenswrapper[4725]: I1014 14:19:37.740633 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5kqrn/crc-debug-v2zx8" Oct 14 14:19:37 crc kubenswrapper[4725]: I1014 14:19:37.837693 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9a27c316-357f-4be8-908c-613b061856c1-host\") pod \"9a27c316-357f-4be8-908c-613b061856c1\" (UID: \"9a27c316-357f-4be8-908c-613b061856c1\") " Oct 14 14:19:37 crc kubenswrapper[4725]: I1014 14:19:37.837816 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a27c316-357f-4be8-908c-613b061856c1-host" (OuterVolumeSpecName: "host") pod "9a27c316-357f-4be8-908c-613b061856c1" (UID: "9a27c316-357f-4be8-908c-613b061856c1"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 14 14:19:37 crc kubenswrapper[4725]: I1014 14:19:37.837961 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr9wz\" (UniqueName: \"kubernetes.io/projected/9a27c316-357f-4be8-908c-613b061856c1-kube-api-access-hr9wz\") pod \"9a27c316-357f-4be8-908c-613b061856c1\" (UID: \"9a27c316-357f-4be8-908c-613b061856c1\") " Oct 14 14:19:37 crc kubenswrapper[4725]: I1014 14:19:37.838427 4725 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9a27c316-357f-4be8-908c-613b061856c1-host\") on node \"crc\" DevicePath \"\"" Oct 14 14:19:37 crc kubenswrapper[4725]: I1014 14:19:37.884665 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a27c316-357f-4be8-908c-613b061856c1-kube-api-access-hr9wz" (OuterVolumeSpecName: "kube-api-access-hr9wz") pod "9a27c316-357f-4be8-908c-613b061856c1" (UID: "9a27c316-357f-4be8-908c-613b061856c1"). InnerVolumeSpecName "kube-api-access-hr9wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:19:37 crc kubenswrapper[4725]: I1014 14:19:37.935368 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a27c316-357f-4be8-908c-613b061856c1" path="/var/lib/kubelet/pods/9a27c316-357f-4be8-908c-613b061856c1/volumes" Oct 14 14:19:37 crc kubenswrapper[4725]: I1014 14:19:37.939760 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hr9wz\" (UniqueName: \"kubernetes.io/projected/9a27c316-357f-4be8-908c-613b061856c1-kube-api-access-hr9wz\") on node \"crc\" DevicePath \"\"" Oct 14 14:19:38 crc kubenswrapper[4725]: I1014 14:19:38.642405 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nm285" event={"ID":"340c0b5f-4be9-4cc5-a618-62ce89447281","Type":"ContainerStarted","Data":"3466c613e9884e3e84032e2f1b36e34328fe09f0c1dccd8c39ddd61b8b5249ac"} Oct 14 14:19:38 crc kubenswrapper[4725]: I1014 14:19:38.644572 4725 scope.go:117] "RemoveContainer" containerID="35169051a4ef81385824d62ef6709b42a20cd4ee84665164ca2535438e5a04d6" Oct 14 14:19:38 crc kubenswrapper[4725]: I1014 14:19:38.644690 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5kqrn/crc-debug-v2zx8" Oct 14 14:19:38 crc kubenswrapper[4725]: I1014 14:19:38.698765 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nm285" podStartSLOduration=2.240291102 podStartE2EDuration="4.698744602s" podCreationTimestamp="2025-10-14 14:19:34 +0000 UTC" firstStartedPulling="2025-10-14 14:19:35.607038986 +0000 UTC m=+3892.455473795" lastFinishedPulling="2025-10-14 14:19:38.065492486 +0000 UTC m=+3894.913927295" observedRunningTime="2025-10-14 14:19:38.680915629 +0000 UTC m=+3895.529350448" watchObservedRunningTime="2025-10-14 14:19:38.698744602 +0000 UTC m=+3895.547179411" Oct 14 14:19:40 crc kubenswrapper[4725]: I1014 14:19:40.922359 4725 scope.go:117] "RemoveContainer" containerID="1e2cf79492a165ced9cc9a27ff00cb3288ac5a0abfa5f696ffa2da70ed76d129" Oct 14 14:19:40 crc kubenswrapper[4725]: E1014 14:19:40.923795 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 14:19:42 crc kubenswrapper[4725]: I1014 14:19:42.932323 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_166bc2c3-283f-4c1d-815b-54fffa8192d5/memcached/0.log" Oct 14 14:19:44 crc kubenswrapper[4725]: I1014 14:19:44.634912 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nm285" Oct 14 14:19:44 crc kubenswrapper[4725]: I1014 14:19:44.634985 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nm285" Oct 14 14:19:44 crc kubenswrapper[4725]: I1014 14:19:44.685610 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nm285" Oct 14 14:19:44 crc kubenswrapper[4725]: I1014 14:19:44.750226 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nm285" Oct 14 14:19:44 crc kubenswrapper[4725]: I1014 14:19:44.920514 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nm285"] Oct 14 14:19:46 crc kubenswrapper[4725]: I1014 14:19:46.711608 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nm285" podUID="340c0b5f-4be9-4cc5-a618-62ce89447281" containerName="registry-server" containerID="cri-o://3466c613e9884e3e84032e2f1b36e34328fe09f0c1dccd8c39ddd61b8b5249ac" gracePeriod=2 Oct 14 14:19:47 crc kubenswrapper[4725]: I1014 14:19:47.212063 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nm285" Oct 14 14:19:47 crc kubenswrapper[4725]: I1014 14:19:47.332796 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/340c0b5f-4be9-4cc5-a618-62ce89447281-catalog-content\") pod \"340c0b5f-4be9-4cc5-a618-62ce89447281\" (UID: \"340c0b5f-4be9-4cc5-a618-62ce89447281\") " Oct 14 14:19:47 crc kubenswrapper[4725]: I1014 14:19:47.333016 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvtpr\" (UniqueName: \"kubernetes.io/projected/340c0b5f-4be9-4cc5-a618-62ce89447281-kube-api-access-fvtpr\") pod \"340c0b5f-4be9-4cc5-a618-62ce89447281\" (UID: \"340c0b5f-4be9-4cc5-a618-62ce89447281\") " Oct 14 14:19:47 crc kubenswrapper[4725]: I1014 14:19:47.333055 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/340c0b5f-4be9-4cc5-a618-62ce89447281-utilities\") pod \"340c0b5f-4be9-4cc5-a618-62ce89447281\" (UID: \"340c0b5f-4be9-4cc5-a618-62ce89447281\") " Oct 14 14:19:47 crc kubenswrapper[4725]: I1014 14:19:47.334525 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/340c0b5f-4be9-4cc5-a618-62ce89447281-utilities" (OuterVolumeSpecName: "utilities") pod "340c0b5f-4be9-4cc5-a618-62ce89447281" (UID: "340c0b5f-4be9-4cc5-a618-62ce89447281"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 14:19:47 crc kubenswrapper[4725]: I1014 14:19:47.340216 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/340c0b5f-4be9-4cc5-a618-62ce89447281-kube-api-access-fvtpr" (OuterVolumeSpecName: "kube-api-access-fvtpr") pod "340c0b5f-4be9-4cc5-a618-62ce89447281" (UID: "340c0b5f-4be9-4cc5-a618-62ce89447281"). InnerVolumeSpecName "kube-api-access-fvtpr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:19:47 crc kubenswrapper[4725]: I1014 14:19:47.350681 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/340c0b5f-4be9-4cc5-a618-62ce89447281-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "340c0b5f-4be9-4cc5-a618-62ce89447281" (UID: "340c0b5f-4be9-4cc5-a618-62ce89447281"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 14:19:47 crc kubenswrapper[4725]: I1014 14:19:47.439377 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/340c0b5f-4be9-4cc5-a618-62ce89447281-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 14:19:47 crc kubenswrapper[4725]: I1014 14:19:47.439423 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvtpr\" (UniqueName: \"kubernetes.io/projected/340c0b5f-4be9-4cc5-a618-62ce89447281-kube-api-access-fvtpr\") on node \"crc\" DevicePath \"\"" Oct 14 14:19:47 crc kubenswrapper[4725]: I1014 14:19:47.439434 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/340c0b5f-4be9-4cc5-a618-62ce89447281-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 14:19:47 crc kubenswrapper[4725]: I1014 14:19:47.723392 4725 generic.go:334] "Generic (PLEG): container finished" podID="340c0b5f-4be9-4cc5-a618-62ce89447281" containerID="3466c613e9884e3e84032e2f1b36e34328fe09f0c1dccd8c39ddd61b8b5249ac" exitCode=0 Oct 14 14:19:47 crc kubenswrapper[4725]: I1014 14:19:47.723486 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nm285" Oct 14 14:19:47 crc kubenswrapper[4725]: I1014 14:19:47.723478 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nm285" event={"ID":"340c0b5f-4be9-4cc5-a618-62ce89447281","Type":"ContainerDied","Data":"3466c613e9884e3e84032e2f1b36e34328fe09f0c1dccd8c39ddd61b8b5249ac"} Oct 14 14:19:47 crc kubenswrapper[4725]: I1014 14:19:47.723595 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nm285" event={"ID":"340c0b5f-4be9-4cc5-a618-62ce89447281","Type":"ContainerDied","Data":"d0d3ba8975ae760013ce5c17160b9103adc07ec81a31a2cdc3dc3830ffd39099"} Oct 14 14:19:47 crc kubenswrapper[4725]: I1014 14:19:47.723641 4725 scope.go:117] "RemoveContainer" containerID="3466c613e9884e3e84032e2f1b36e34328fe09f0c1dccd8c39ddd61b8b5249ac" Oct 14 14:19:47 crc kubenswrapper[4725]: I1014 14:19:47.741549 4725 scope.go:117] "RemoveContainer" containerID="c56d0d4ba2c67dc0177b6ca8ef3a1daeeafdd5827fc1eb88931ab9020a5b0e0a" Oct 14 14:19:47 crc kubenswrapper[4725]: I1014 14:19:47.769070 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nm285"] Oct 14 14:19:47 crc kubenswrapper[4725]: I1014 14:19:47.770149 4725 scope.go:117] "RemoveContainer" containerID="531424d61ac91291c3f744c19cf4802abafb5f8af63e7d1ebd1e0c5788d2a341" Oct 14 14:19:47 crc kubenswrapper[4725]: I1014 14:19:47.777287 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nm285"] Oct 14 14:19:47 crc kubenswrapper[4725]: I1014 14:19:47.823414 4725 scope.go:117] "RemoveContainer" containerID="3466c613e9884e3e84032e2f1b36e34328fe09f0c1dccd8c39ddd61b8b5249ac" Oct 14 14:19:47 crc kubenswrapper[4725]: E1014 14:19:47.823958 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3466c613e9884e3e84032e2f1b36e34328fe09f0c1dccd8c39ddd61b8b5249ac\": container with ID starting with 3466c613e9884e3e84032e2f1b36e34328fe09f0c1dccd8c39ddd61b8b5249ac not found: ID does not exist" containerID="3466c613e9884e3e84032e2f1b36e34328fe09f0c1dccd8c39ddd61b8b5249ac" Oct 14 14:19:47 crc kubenswrapper[4725]: I1014 14:19:47.823995 4725 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3466c613e9884e3e84032e2f1b36e34328fe09f0c1dccd8c39ddd61b8b5249ac"} err="failed to get container status \"3466c613e9884e3e84032e2f1b36e34328fe09f0c1dccd8c39ddd61b8b5249ac\": rpc error: code = NotFound desc = could not find container \"3466c613e9884e3e84032e2f1b36e34328fe09f0c1dccd8c39ddd61b8b5249ac\": container with ID starting with 3466c613e9884e3e84032e2f1b36e34328fe09f0c1dccd8c39ddd61b8b5249ac not found: ID does not exist" Oct 14 14:19:47 crc kubenswrapper[4725]: I1014 14:19:47.824021 4725 scope.go:117] "RemoveContainer" containerID="c56d0d4ba2c67dc0177b6ca8ef3a1daeeafdd5827fc1eb88931ab9020a5b0e0a" Oct 14 14:19:47 crc kubenswrapper[4725]: E1014 14:19:47.824361 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c56d0d4ba2c67dc0177b6ca8ef3a1daeeafdd5827fc1eb88931ab9020a5b0e0a\": container with ID starting with c56d0d4ba2c67dc0177b6ca8ef3a1daeeafdd5827fc1eb88931ab9020a5b0e0a not found: ID does not exist" containerID="c56d0d4ba2c67dc0177b6ca8ef3a1daeeafdd5827fc1eb88931ab9020a5b0e0a" Oct 14 14:19:47 crc kubenswrapper[4725]: I1014 14:19:47.824386 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c56d0d4ba2c67dc0177b6ca8ef3a1daeeafdd5827fc1eb88931ab9020a5b0e0a"} err="failed to get container status \"c56d0d4ba2c67dc0177b6ca8ef3a1daeeafdd5827fc1eb88931ab9020a5b0e0a\": rpc error: code = NotFound desc = could not find container \"c56d0d4ba2c67dc0177b6ca8ef3a1daeeafdd5827fc1eb88931ab9020a5b0e0a\": container with ID starting with c56d0d4ba2c67dc0177b6ca8ef3a1daeeafdd5827fc1eb88931ab9020a5b0e0a not found: ID does not exist" Oct 14 14:19:47 crc kubenswrapper[4725]: I1014 14:19:47.824403 4725 scope.go:117] "RemoveContainer" containerID="531424d61ac91291c3f744c19cf4802abafb5f8af63e7d1ebd1e0c5788d2a341" Oct 14 14:19:47 crc kubenswrapper[4725]: E1014 14:19:47.824832 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"531424d61ac91291c3f744c19cf4802abafb5f8af63e7d1ebd1e0c5788d2a341\": container with ID starting with 531424d61ac91291c3f744c19cf4802abafb5f8af63e7d1ebd1e0c5788d2a341 not found: ID does not exist" containerID="531424d61ac91291c3f744c19cf4802abafb5f8af63e7d1ebd1e0c5788d2a341" Oct 14 14:19:47 crc kubenswrapper[4725]: I1014 14:19:47.824860 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"531424d61ac91291c3f744c19cf4802abafb5f8af63e7d1ebd1e0c5788d2a341"} err="failed to get container status \"531424d61ac91291c3f744c19cf4802abafb5f8af63e7d1ebd1e0c5788d2a341\": rpc error: code = NotFound desc = could not find container \"531424d61ac91291c3f744c19cf4802abafb5f8af63e7d1ebd1e0c5788d2a341\": container with ID starting with 531424d61ac91291c3f744c19cf4802abafb5f8af63e7d1ebd1e0c5788d2a341 not found: ID does not exist" Oct 14 14:19:47 crc kubenswrapper[4725]: I1014 14:19:47.935754 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="340c0b5f-4be9-4cc5-a618-62ce89447281" path="/var/lib/kubelet/pods/340c0b5f-4be9-4cc5-a618-62ce89447281/volumes" Oct 14 14:19:53 crc kubenswrapper[4725]: I1014 14:19:53.929988 4725 scope.go:117] "RemoveContainer" containerID="1e2cf79492a165ced9cc9a27ff00cb3288ac5a0abfa5f696ffa2da70ed76d129" Oct 14 14:19:53 crc kubenswrapper[4725]: E1014 14:19:53.930752 4725 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 14:19:59 crc kubenswrapper[4725]: I1014 14:19:59.792267 4725 scope.go:117] "RemoveContainer" containerID="3fc2db4314f5110e6163825db8a8cc51adfa364d46e3b9a6da1deb9d4fb46fd9" Oct 14 14:20:00 crc kubenswrapper[4725]: I1014 14:20:00.000044 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_98d1e881be4ee8680d11b9a5631544c7dac60e779fa4b1480aa7108c3f4xjzs_61036e72-6020-469f-9406-d49df28fbd36/util/0.log" Oct 14 14:20:00 crc kubenswrapper[4725]: I1014 14:20:00.189345 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_98d1e881be4ee8680d11b9a5631544c7dac60e779fa4b1480aa7108c3f4xjzs_61036e72-6020-469f-9406-d49df28fbd36/util/0.log" Oct 14 14:20:00 crc kubenswrapper[4725]: I1014 14:20:00.205056 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_98d1e881be4ee8680d11b9a5631544c7dac60e779fa4b1480aa7108c3f4xjzs_61036e72-6020-469f-9406-d49df28fbd36/pull/0.log" Oct 14 14:20:00 crc kubenswrapper[4725]: I1014 14:20:00.233044 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_98d1e881be4ee8680d11b9a5631544c7dac60e779fa4b1480aa7108c3f4xjzs_61036e72-6020-469f-9406-d49df28fbd36/pull/0.log" Oct 14 14:20:00 crc kubenswrapper[4725]: I1014 14:20:00.392778 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_98d1e881be4ee8680d11b9a5631544c7dac60e779fa4b1480aa7108c3f4xjzs_61036e72-6020-469f-9406-d49df28fbd36/util/0.log" Oct 14 14:20:00 crc kubenswrapper[4725]: I1014 14:20:00.415699 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_98d1e881be4ee8680d11b9a5631544c7dac60e779fa4b1480aa7108c3f4xjzs_61036e72-6020-469f-9406-d49df28fbd36/pull/0.log" Oct 14 14:20:00 crc kubenswrapper[4725]: I1014 14:20:00.456103 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_98d1e881be4ee8680d11b9a5631544c7dac60e779fa4b1480aa7108c3f4xjzs_61036e72-6020-469f-9406-d49df28fbd36/extract/0.log" Oct 14 14:20:00 crc kubenswrapper[4725]: I1014 14:20:00.638906 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-64f84fcdbb-wnx2r_ea9bba90-8416-4b43-a2be-d41f635db481/kube-rbac-proxy/0.log" Oct 14 14:20:00 crc kubenswrapper[4725]: I1014 14:20:00.685438 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-64f84fcdbb-wnx2r_ea9bba90-8416-4b43-a2be-d41f635db481/manager/0.log" Oct 14 14:20:00 crc kubenswrapper[4725]: I1014 14:20:00.706110 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-59cdc64769-hbgnj_f8ae62f1-e80d-4f8c-81c3-0c5c50338046/kube-rbac-proxy/0.log" Oct 14 14:20:00 crc kubenswrapper[4725]: I1014 14:20:00.868978 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-59cdc64769-hbgnj_f8ae62f1-e80d-4f8c-81c3-0c5c50338046/manager/0.log" Oct 14 14:20:00 crc kubenswrapper[4725]: I1014 14:20:00.908078 4725 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_designate-operator-controller-manager-687df44cdb-h8bkv_40be9ede-1b5e-4022-9626-8367074f88c1/kube-rbac-proxy/0.log" Oct 14 14:20:00 crc kubenswrapper[4725]: I1014 14:20:00.917108 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-687df44cdb-h8bkv_40be9ede-1b5e-4022-9626-8367074f88c1/manager/0.log" Oct 14 14:20:01 crc kubenswrapper[4725]: I1014 14:20:01.271445 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7bb46cd7d-pjkj4_da9db202-78ec-4df7-9ead-374e287391a2/kube-rbac-proxy/0.log" Oct 14 14:20:01 crc kubenswrapper[4725]: I1014 14:20:01.342257 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7bb46cd7d-pjkj4_da9db202-78ec-4df7-9ead-374e287391a2/manager/0.log" Oct 14 14:20:01 crc kubenswrapper[4725]: I1014 14:20:01.479928 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-6d9967f8dd-w9x95_8dc80d7a-d38e-4baa-85ae-fa856c39b48f/kube-rbac-proxy/0.log" Oct 14 14:20:01 crc kubenswrapper[4725]: I1014 14:20:01.546834 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6d74794d9b-96vkx_f0b42b9f-7713-48ca-b148-c42b5d2006f3/kube-rbac-proxy/0.log" Oct 14 14:20:01 crc kubenswrapper[4725]: I1014 14:20:01.553674 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-6d9967f8dd-w9x95_8dc80d7a-d38e-4baa-85ae-fa856c39b48f/manager/0.log" Oct 14 14:20:01 crc kubenswrapper[4725]: I1014 14:20:01.710982 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6d74794d9b-96vkx_f0b42b9f-7713-48ca-b148-c42b5d2006f3/manager/0.log" Oct 14 14:20:01 crc kubenswrapper[4725]: I1014 14:20:01.759096 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-585fc5b659-jgbh4_3316f5a0-820b-45ec-802a-dc3203f1d9fa/kube-rbac-proxy/0.log" Oct 14 14:20:01 crc kubenswrapper[4725]: I1014 14:20:01.924392 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-585fc5b659-jgbh4_3316f5a0-820b-45ec-802a-dc3203f1d9fa/manager/0.log" Oct 14 14:20:01 crc kubenswrapper[4725]: I1014 14:20:01.967471 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-74cb5cbc49-tt2xm_64f86a1d-32a0-4133-94c3-b59ab14a0d4e/kube-rbac-proxy/0.log" Oct 14 14:20:01 crc kubenswrapper[4725]: I1014 14:20:01.970627 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-74cb5cbc49-tt2xm_64f86a1d-32a0-4133-94c3-b59ab14a0d4e/manager/0.log" Oct 14 14:20:02 crc kubenswrapper[4725]: I1014 14:20:02.156178 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-ddb98f99b-8jwrn_95db1b3c-9877-4e8f-b756-532ccfd5db7a/kube-rbac-proxy/0.log" Oct 14 14:20:02 crc kubenswrapper[4725]: I1014 14:20:02.240801 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-ddb98f99b-8jwrn_95db1b3c-9877-4e8f-b756-532ccfd5db7a/manager/0.log" Oct 14 14:20:02 crc kubenswrapper[4725]: I1014 14:20:02.271890 4725 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-59578bc799-4jwx9_0ba97bf9-2b5b-495b-99e2-328a987535e1/kube-rbac-proxy/0.log" Oct 14 14:20:02 crc kubenswrapper[4725]: I1014 14:20:02.345069 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-59578bc799-4jwx9_0ba97bf9-2b5b-495b-99e2-328a987535e1/manager/0.log" Oct 14 14:20:02 crc kubenswrapper[4725]: I1014 14:20:02.407219 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-5777b4f897-9q25r_960fbcd9-1563-4f84-89ac-53694a2413de/kube-rbac-proxy/0.log" Oct 14 14:20:02 crc kubenswrapper[4725]: I1014 14:20:02.471887 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-5777b4f897-9q25r_960fbcd9-1563-4f84-89ac-53694a2413de/manager/0.log" Oct 14 14:20:02 crc kubenswrapper[4725]: I1014 14:20:02.596810 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-797d478b46-55m5j_15e9c123-7b58-49ab-b7d7-f429e6b15c1e/kube-rbac-proxy/0.log" Oct 14 14:20:02 crc kubenswrapper[4725]: I1014 14:20:02.661031 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-797d478b46-55m5j_15e9c123-7b58-49ab-b7d7-f429e6b15c1e/manager/0.log" Oct 14 14:20:02 crc kubenswrapper[4725]: I1014 14:20:02.733433 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-57bb74c7bf-mvqp4_720eb902-cb6c-4d90-b25b-eae97e6d055d/kube-rbac-proxy/0.log" Oct 14 14:20:02 crc kubenswrapper[4725]: I1014 14:20:02.870925 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-57bb74c7bf-mvqp4_720eb902-cb6c-4d90-b25b-eae97e6d055d/manager/0.log" Oct 14 14:20:02 crc kubenswrapper[4725]: I1014 14:20:02.872827 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6d7c7ddf95-d258r_e4436f5f-2c38-49ee-8e53-061e294f009f/kube-rbac-proxy/0.log" Oct 14 14:20:02 crc kubenswrapper[4725]: I1014 14:20:02.944824 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6d7c7ddf95-d258r_e4436f5f-2c38-49ee-8e53-061e294f009f/manager/0.log" Oct 14 14:20:03 crc kubenswrapper[4725]: I1014 14:20:03.074601 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6cc7fb757d5s454_1ee65fef-639a-4cff-8e7e-b68f7dd3a4ec/manager/0.log" Oct 14 14:20:03 crc kubenswrapper[4725]: I1014 14:20:03.103431 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6cc7fb757d5s454_1ee65fef-639a-4cff-8e7e-b68f7dd3a4ec/kube-rbac-proxy/0.log" Oct 14 14:20:03 crc kubenswrapper[4725]: I1014 14:20:03.218345 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7cff5c958-wscp6_2627f8df-e54f-45e7-862a-fcacad250f2a/kube-rbac-proxy/0.log" Oct 14 14:20:03 crc kubenswrapper[4725]: I1014 14:20:03.339419 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-577669444d-64dvz_859cc4b3-36d4-43aa-8780-12ffdfef67e1/kube-rbac-proxy/0.log" Oct 14 14:20:03 crc 
kubenswrapper[4725]: I1014 14:20:03.520727 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-v8nhf_4473a60c-84a6-440a-ad02-ed3331345270/registry-server/0.log" Oct 14 14:20:03 crc kubenswrapper[4725]: I1014 14:20:03.543111 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-577669444d-64dvz_859cc4b3-36d4-43aa-8780-12ffdfef67e1/operator/0.log" Oct 14 14:20:03 crc kubenswrapper[4725]: I1014 14:20:03.714316 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-869cc7797f-fnf6m_28bb98a9-5146-48a5-8f4f-a3a7766ab18c/kube-rbac-proxy/0.log" Oct 14 14:20:03 crc kubenswrapper[4725]: I1014 14:20:03.791819 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-664664cb68-b462h_420f32f0-6ea7-4489-8165-215def02acf1/kube-rbac-proxy/0.log" Oct 14 14:20:03 crc kubenswrapper[4725]: I1014 14:20:03.899729 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-869cc7797f-fnf6m_28bb98a9-5146-48a5-8f4f-a3a7766ab18c/manager/0.log" Oct 14 14:20:03 crc kubenswrapper[4725]: I1014 14:20:03.983298 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-664664cb68-b462h_420f32f0-6ea7-4489-8165-215def02acf1/manager/0.log" Oct 14 14:20:04 crc kubenswrapper[4725]: I1014 14:20:04.067203 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-tsxsk_3a364ec5-2c66-4d8d-8f51-b3b4357e7b67/operator/0.log" Oct 14 14:20:04 crc kubenswrapper[4725]: I1014 14:20:04.206010 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f4d5dfdc6-vmdhg_3d451d86-1531-4612-a2c8-5be10e09f890/kube-rbac-proxy/0.log" Oct 14 14:20:04 crc kubenswrapper[4725]: I1014 14:20:04.257557 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f4d5dfdc6-vmdhg_3d451d86-1531-4612-a2c8-5be10e09f890/manager/0.log" Oct 14 14:20:04 crc kubenswrapper[4725]: I1014 14:20:04.353281 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-578874c84d-878xx_fb53b295-5121-4b18-9d3d-e3e981a64f2d/kube-rbac-proxy/0.log" Oct 14 14:20:04 crc kubenswrapper[4725]: I1014 14:20:04.365429 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7cff5c958-wscp6_2627f8df-e54f-45e7-862a-fcacad250f2a/manager/0.log" Oct 14 14:20:04 crc kubenswrapper[4725]: I1014 14:20:04.483136 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-ffcdd6c94-nhdz7_cb31097e-d603-4f4d-8cc1-a5f1a841ea00/kube-rbac-proxy/0.log" Oct 14 14:20:04 crc kubenswrapper[4725]: I1014 14:20:04.495231 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-578874c84d-878xx_fb53b295-5121-4b18-9d3d-e3e981a64f2d/manager/0.log" Oct 14 14:20:04 crc kubenswrapper[4725]: I1014 14:20:04.584143 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-ffcdd6c94-nhdz7_cb31097e-d603-4f4d-8cc1-a5f1a841ea00/manager/0.log" Oct 14 14:20:04 crc 
kubenswrapper[4725]: I1014 14:20:04.654077 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-646675d848-l76vk_50605e76-3c4b-480c-9a76-e84962f38851/kube-rbac-proxy/0.log" Oct 14 14:20:04 crc kubenswrapper[4725]: I1014 14:20:04.678523 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-646675d848-l76vk_50605e76-3c4b-480c-9a76-e84962f38851/manager/0.log" Oct 14 14:20:07 crc kubenswrapper[4725]: I1014 14:20:07.922997 4725 scope.go:117] "RemoveContainer" containerID="1e2cf79492a165ced9cc9a27ff00cb3288ac5a0abfa5f696ffa2da70ed76d129" Oct 14 14:20:07 crc kubenswrapper[4725]: E1014 14:20:07.923617 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 14:20:19 crc kubenswrapper[4725]: I1014 14:20:19.470471 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-658kg_10813d7e-3ed3-49a7-a2ad-5aa0db76a25d/control-plane-machine-set-operator/0.log" Oct 14 14:20:19 crc kubenswrapper[4725]: I1014 14:20:19.680355 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-r985j_1abee1b1-8c1e-43df-89cc-5381a2ef0fc6/kube-rbac-proxy/0.log" Oct 14 14:20:19 crc kubenswrapper[4725]: I1014 14:20:19.684140 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-r985j_1abee1b1-8c1e-43df-89cc-5381a2ef0fc6/machine-api-operator/0.log" Oct 14 14:20:21 crc kubenswrapper[4725]: I1014 14:20:21.921125 4725 scope.go:117] "RemoveContainer" containerID="1e2cf79492a165ced9cc9a27ff00cb3288ac5a0abfa5f696ffa2da70ed76d129" Oct 14 14:20:21 crc kubenswrapper[4725]: E1014 14:20:21.921653 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-t9hh9_openshift-machine-config-operator(ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c)\"" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" Oct 14 14:20:31 crc kubenswrapper[4725]: I1014 14:20:31.280319 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-kpptr_39534dc6-c413-407f-a4d4-1d129d0dcdf3/cert-manager-controller/0.log" Oct 14 14:20:31 crc kubenswrapper[4725]: I1014 14:20:31.452669 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-s9g2p_f6145f04-9a33-4d9b-9158-7f6fd9bf38d3/cert-manager-cainjector/0.log" Oct 14 14:20:31 crc kubenswrapper[4725]: I1014 14:20:31.511407 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-jjlbv_65c2f894-94a7-4e13-b4f9-16bc3a11921b/cert-manager-webhook/0.log" Oct 14 14:20:34 crc kubenswrapper[4725]: I1014 14:20:34.921798 4725 scope.go:117] "RemoveContainer" containerID="1e2cf79492a165ced9cc9a27ff00cb3288ac5a0abfa5f696ffa2da70ed76d129" Oct 14 14:20:35 crc 
kubenswrapper[4725]: I1014 14:20:35.164967 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" event={"ID":"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c","Type":"ContainerStarted","Data":"c7591d7dd7bc03041518beda268ac23dc385eeb1863fae61d197fbff66163022"} Oct 14 14:20:43 crc kubenswrapper[4725]: I1014 14:20:43.709195 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6b874cbd85-nwf94_5ed299ff-7f75-4376-a446-2f24b1d1e539/nmstate-console-plugin/0.log" Oct 14 14:20:43 crc kubenswrapper[4725]: I1014 14:20:43.867093 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-5j272_3f7ba899-2a43-4866-b3d7-34b6ca02b7e4/nmstate-handler/0.log" Oct 14 14:20:43 crc kubenswrapper[4725]: I1014 14:20:43.949511 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-zxznn_42b236b2-dcbb-4c0c-8916-1eba5e90f301/nmstate-metrics/0.log" Oct 14 14:20:43 crc kubenswrapper[4725]: I1014 14:20:43.950945 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-zxznn_42b236b2-dcbb-4c0c-8916-1eba5e90f301/kube-rbac-proxy/0.log" Oct 14 14:20:44 crc kubenswrapper[4725]: I1014 14:20:44.065536 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-858ddd8f98-kgnhm_f4b6e00a-85f8-4036-abc1-c53043f84612/nmstate-operator/0.log" Oct 14 14:20:44 crc kubenswrapper[4725]: I1014 14:20:44.120303 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6cdbc54649-bvg5g_9a636636-e68e-4e0f-ac55-b64f6e886b0e/nmstate-webhook/0.log" Oct 14 14:20:57 crc kubenswrapper[4725]: I1014 14:20:57.662818 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-cs2cd_bdee3f99-134e-4020-b9a6-fdc4c66081eb/kube-rbac-proxy/0.log" Oct 14 14:20:57 crc kubenswrapper[4725]: I1014 14:20:57.797371 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-cs2cd_bdee3f99-134e-4020-b9a6-fdc4c66081eb/controller/0.log" Oct 14 14:20:57 crc kubenswrapper[4725]: I1014 14:20:57.885939 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pmr8r_0aaf5c0e-3673-4bfa-a046-feed6a0121d7/cp-frr-files/0.log" Oct 14 14:20:58 crc kubenswrapper[4725]: I1014 14:20:58.081243 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pmr8r_0aaf5c0e-3673-4bfa-a046-feed6a0121d7/cp-frr-files/0.log" Oct 14 14:20:58 crc kubenswrapper[4725]: I1014 14:20:58.089843 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pmr8r_0aaf5c0e-3673-4bfa-a046-feed6a0121d7/cp-reloader/0.log" Oct 14 14:20:58 crc kubenswrapper[4725]: I1014 14:20:58.092625 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pmr8r_0aaf5c0e-3673-4bfa-a046-feed6a0121d7/cp-reloader/0.log" Oct 14 14:20:58 crc kubenswrapper[4725]: I1014 14:20:58.095265 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pmr8r_0aaf5c0e-3673-4bfa-a046-feed6a0121d7/cp-metrics/0.log" Oct 14 14:20:58 crc kubenswrapper[4725]: I1014 14:20:58.277144 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pmr8r_0aaf5c0e-3673-4bfa-a046-feed6a0121d7/cp-metrics/0.log" Oct 14 14:20:58 crc kubenswrapper[4725]: I1014 14:20:58.291584 4725 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pmr8r_0aaf5c0e-3673-4bfa-a046-feed6a0121d7/cp-frr-files/0.log" Oct 14 14:20:58 crc kubenswrapper[4725]: I1014 14:20:58.298062 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pmr8r_0aaf5c0e-3673-4bfa-a046-feed6a0121d7/cp-reloader/0.log" Oct 14 14:20:58 crc kubenswrapper[4725]: I1014 14:20:58.298062 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pmr8r_0aaf5c0e-3673-4bfa-a046-feed6a0121d7/cp-metrics/0.log" Oct 14 14:20:58 crc kubenswrapper[4725]: I1014 14:20:58.490399 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pmr8r_0aaf5c0e-3673-4bfa-a046-feed6a0121d7/cp-frr-files/0.log" Oct 14 14:20:58 crc kubenswrapper[4725]: I1014 14:20:58.497100 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pmr8r_0aaf5c0e-3673-4bfa-a046-feed6a0121d7/cp-metrics/0.log" Oct 14 14:20:58 crc kubenswrapper[4725]: I1014 14:20:58.505539 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pmr8r_0aaf5c0e-3673-4bfa-a046-feed6a0121d7/cp-reloader/0.log" Oct 14 14:20:58 crc kubenswrapper[4725]: I1014 14:20:58.506589 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pmr8r_0aaf5c0e-3673-4bfa-a046-feed6a0121d7/controller/0.log" Oct 14 14:20:58 crc kubenswrapper[4725]: I1014 14:20:58.642731 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pmr8r_0aaf5c0e-3673-4bfa-a046-feed6a0121d7/frr-metrics/0.log" Oct 14 14:20:58 crc kubenswrapper[4725]: I1014 14:20:58.666045 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pmr8r_0aaf5c0e-3673-4bfa-a046-feed6a0121d7/kube-rbac-proxy/0.log" Oct 14 14:20:58 crc kubenswrapper[4725]: I1014 14:20:58.670100 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pmr8r_0aaf5c0e-3673-4bfa-a046-feed6a0121d7/kube-rbac-proxy-frr/0.log" Oct 14 14:20:58 crc kubenswrapper[4725]: I1014 14:20:58.861348 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pmr8r_0aaf5c0e-3673-4bfa-a046-feed6a0121d7/reloader/0.log" Oct 14 14:20:58 crc kubenswrapper[4725]: I1014 14:20:58.897837 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-64bf5d555-7xnfb_9c551c0f-3df3-4ba6-8bbb-4d996cad9d45/frr-k8s-webhook-server/0.log" Oct 14 14:20:59 crc kubenswrapper[4725]: I1014 14:20:59.140622 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5c787f6f6d-k5ls2_9f68d3de-1952-4351-9955-742c297861c5/manager/0.log" Oct 14 14:20:59 crc kubenswrapper[4725]: I1014 14:20:59.257734 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-95c4f899b-qzjhp_5c72d8f5-e412-465e-9f73-597f96b57392/webhook-server/0.log" Oct 14 14:20:59 crc kubenswrapper[4725]: I1014 14:20:59.365016 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2dxqc_8f68d749-82ff-45ee-b658-2324015012f7/kube-rbac-proxy/0.log" Oct 14 14:20:59 crc kubenswrapper[4725]: I1014 14:20:59.953985 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2dxqc_8f68d749-82ff-45ee-b658-2324015012f7/speaker/0.log" Oct 14 14:21:00 crc kubenswrapper[4725]: I1014 14:21:00.146755 4725 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-pmr8r_0aaf5c0e-3673-4bfa-a046-feed6a0121d7/frr/0.log" Oct 14 14:21:11 crc kubenswrapper[4725]: I1014 14:21:11.651679 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bnmkg_f11c88bf-9dff-4a1e-825d-bcaae865c70d/util/0.log" Oct 14 14:21:11 crc kubenswrapper[4725]: I1014 14:21:11.791762 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bnmkg_f11c88bf-9dff-4a1e-825d-bcaae865c70d/util/0.log" Oct 14 14:21:11 crc kubenswrapper[4725]: I1014 14:21:11.792260 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bnmkg_f11c88bf-9dff-4a1e-825d-bcaae865c70d/pull/0.log" Oct 14 14:21:11 crc kubenswrapper[4725]: I1014 14:21:11.835537 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bnmkg_f11c88bf-9dff-4a1e-825d-bcaae865c70d/pull/0.log" Oct 14 14:21:12 crc kubenswrapper[4725]: I1014 14:21:12.026989 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bnmkg_f11c88bf-9dff-4a1e-825d-bcaae865c70d/extract/0.log" Oct 14 14:21:12 crc kubenswrapper[4725]: I1014 14:21:12.090758 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bnmkg_f11c88bf-9dff-4a1e-825d-bcaae865c70d/util/0.log" Oct 14 14:21:12 crc kubenswrapper[4725]: I1014 14:21:12.127540 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2bnmkg_f11c88bf-9dff-4a1e-825d-bcaae865c70d/pull/0.log" Oct 14 14:21:12 crc kubenswrapper[4725]: I1014 14:21:12.236763 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rjr78_08e081de-8c18-4fc2-8b9b-844352989e96/extract-utilities/0.log" Oct 14 14:21:12 crc kubenswrapper[4725]: I1014 14:21:12.411249 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rjr78_08e081de-8c18-4fc2-8b9b-844352989e96/extract-content/0.log" Oct 14 14:21:12 crc kubenswrapper[4725]: I1014 14:21:12.423206 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rjr78_08e081de-8c18-4fc2-8b9b-844352989e96/extract-content/0.log" Oct 14 14:21:12 crc kubenswrapper[4725]: I1014 14:21:12.426927 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rjr78_08e081de-8c18-4fc2-8b9b-844352989e96/extract-utilities/0.log" Oct 14 14:21:12 crc kubenswrapper[4725]: I1014 14:21:12.658127 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rjr78_08e081de-8c18-4fc2-8b9b-844352989e96/extract-utilities/0.log" Oct 14 14:21:12 crc kubenswrapper[4725]: I1014 14:21:12.672086 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rjr78_08e081de-8c18-4fc2-8b9b-844352989e96/extract-content/0.log" Oct 14 14:21:12 crc kubenswrapper[4725]: I1014 14:21:12.864971 4725 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-chnk6_48c6e1e3-4851-4faf-bcb0-d12f5c73f6a9/extract-utilities/0.log" Oct 14 14:21:13 crc kubenswrapper[4725]: I1014 14:21:13.048723 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-chnk6_48c6e1e3-4851-4faf-bcb0-d12f5c73f6a9/extract-utilities/0.log" Oct 14 14:21:13 crc kubenswrapper[4725]: I1014 14:21:13.092552 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rjr78_08e081de-8c18-4fc2-8b9b-844352989e96/registry-server/0.log" Oct 14 14:21:13 crc kubenswrapper[4725]: I1014 14:21:13.097763 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-chnk6_48c6e1e3-4851-4faf-bcb0-d12f5c73f6a9/extract-content/0.log" Oct 14 14:21:13 crc kubenswrapper[4725]: I1014 14:21:13.111942 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-chnk6_48c6e1e3-4851-4faf-bcb0-d12f5c73f6a9/extract-content/0.log" Oct 14 14:21:13 crc kubenswrapper[4725]: I1014 14:21:13.304050 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-chnk6_48c6e1e3-4851-4faf-bcb0-d12f5c73f6a9/extract-utilities/0.log" Oct 14 14:21:13 crc kubenswrapper[4725]: I1014 14:21:13.329280 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-chnk6_48c6e1e3-4851-4faf-bcb0-d12f5c73f6a9/extract-content/0.log" Oct 14 14:21:13 crc kubenswrapper[4725]: I1014 14:21:13.500638 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb_d0546f96-2a09-4b8f-9f0d-33615b2a71b8/util/0.log" Oct 14 14:21:13 crc kubenswrapper[4725]: I1014 14:21:13.771783 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb_d0546f96-2a09-4b8f-9f0d-33615b2a71b8/pull/0.log" Oct 14 14:21:13 crc kubenswrapper[4725]: I1014 14:21:13.860013 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb_d0546f96-2a09-4b8f-9f0d-33615b2a71b8/util/0.log" Oct 14 14:21:13 crc kubenswrapper[4725]: I1014 14:21:13.864420 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb_d0546f96-2a09-4b8f-9f0d-33615b2a71b8/pull/0.log" Oct 14 14:21:14 crc kubenswrapper[4725]: I1014 14:21:14.017017 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-chnk6_48c6e1e3-4851-4faf-bcb0-d12f5c73f6a9/registry-server/0.log" Oct 14 14:21:14 crc kubenswrapper[4725]: I1014 14:21:14.052216 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb_d0546f96-2a09-4b8f-9f0d-33615b2a71b8/util/0.log" Oct 14 14:21:14 crc kubenswrapper[4725]: I1014 14:21:14.090893 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb_d0546f96-2a09-4b8f-9f0d-33615b2a71b8/pull/0.log" Oct 14 14:21:14 crc kubenswrapper[4725]: I1014 14:21:14.118191 4725 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835chs6bb_d0546f96-2a09-4b8f-9f0d-33615b2a71b8/extract/0.log" Oct 14 14:21:14 crc kubenswrapper[4725]: I1014 14:21:14.256336 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-q66z9_3b46f078-a8dc-4eaa-a657-4f6c85c19c06/marketplace-operator/0.log" Oct 14 14:21:14 crc kubenswrapper[4725]: I1014 14:21:14.329808 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xmq8l_d8154e8c-6473-4716-ba8a-b4141852b960/extract-utilities/0.log" Oct 14 14:21:14 crc kubenswrapper[4725]: I1014 14:21:14.483557 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xmq8l_d8154e8c-6473-4716-ba8a-b4141852b960/extract-utilities/0.log" Oct 14 14:21:14 crc kubenswrapper[4725]: I1014 14:21:14.540237 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xmq8l_d8154e8c-6473-4716-ba8a-b4141852b960/extract-content/0.log" Oct 14 14:21:14 crc kubenswrapper[4725]: I1014 14:21:14.562812 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xmq8l_d8154e8c-6473-4716-ba8a-b4141852b960/extract-content/0.log" Oct 14 14:21:14 crc kubenswrapper[4725]: I1014 14:21:14.719555 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xmq8l_d8154e8c-6473-4716-ba8a-b4141852b960/extract-utilities/0.log" Oct 14 14:21:14 crc kubenswrapper[4725]: I1014 14:21:14.719619 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xmq8l_d8154e8c-6473-4716-ba8a-b4141852b960/extract-content/0.log" Oct 14 14:21:14 crc kubenswrapper[4725]: I1014 14:21:14.863963 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xmq8l_d8154e8c-6473-4716-ba8a-b4141852b960/registry-server/0.log" Oct 14 14:21:14 crc kubenswrapper[4725]: I1014 14:21:14.919780 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g5hgh_58b9a349-dfe9-4cc9-851a-80c8bfc2f898/extract-utilities/0.log" Oct 14 14:21:15 crc kubenswrapper[4725]: I1014 14:21:15.122286 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g5hgh_58b9a349-dfe9-4cc9-851a-80c8bfc2f898/extract-utilities/0.log" Oct 14 14:21:15 crc kubenswrapper[4725]: I1014 14:21:15.147759 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g5hgh_58b9a349-dfe9-4cc9-851a-80c8bfc2f898/extract-content/0.log" Oct 14 14:21:15 crc kubenswrapper[4725]: I1014 14:21:15.196778 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g5hgh_58b9a349-dfe9-4cc9-851a-80c8bfc2f898/extract-content/0.log" Oct 14 14:21:15 crc kubenswrapper[4725]: I1014 14:21:15.344037 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g5hgh_58b9a349-dfe9-4cc9-851a-80c8bfc2f898/extract-utilities/0.log" Oct 14 14:21:15 crc kubenswrapper[4725]: I1014 14:21:15.371661 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g5hgh_58b9a349-dfe9-4cc9-851a-80c8bfc2f898/extract-content/0.log" Oct 14 14:21:15 crc kubenswrapper[4725]: I1014 14:21:15.979053 4725 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-marketplace_redhat-operators-g5hgh_58b9a349-dfe9-4cc9-851a-80c8bfc2f898/registry-server/0.log" Oct 14 14:21:54 crc kubenswrapper[4725]: I1014 14:21:54.357086 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w67qn"] Oct 14 14:21:54 crc kubenswrapper[4725]: E1014 14:21:54.358100 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="340c0b5f-4be9-4cc5-a618-62ce89447281" containerName="extract-utilities" Oct 14 14:21:54 crc kubenswrapper[4725]: I1014 14:21:54.358116 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="340c0b5f-4be9-4cc5-a618-62ce89447281" containerName="extract-utilities" Oct 14 14:21:54 crc kubenswrapper[4725]: E1014 14:21:54.358131 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="340c0b5f-4be9-4cc5-a618-62ce89447281" containerName="extract-content" Oct 14 14:21:54 crc kubenswrapper[4725]: I1014 14:21:54.358138 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="340c0b5f-4be9-4cc5-a618-62ce89447281" containerName="extract-content" Oct 14 14:21:54 crc kubenswrapper[4725]: E1014 14:21:54.358148 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="340c0b5f-4be9-4cc5-a618-62ce89447281" containerName="registry-server" Oct 14 14:21:54 crc kubenswrapper[4725]: I1014 14:21:54.358155 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="340c0b5f-4be9-4cc5-a618-62ce89447281" containerName="registry-server" Oct 14 14:21:54 crc kubenswrapper[4725]: E1014 14:21:54.358192 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a27c316-357f-4be8-908c-613b061856c1" containerName="container-00" Oct 14 14:21:54 crc kubenswrapper[4725]: I1014 14:21:54.358201 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a27c316-357f-4be8-908c-613b061856c1" containerName="container-00" Oct 14 14:21:54 crc kubenswrapper[4725]: I1014 14:21:54.358502 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a27c316-357f-4be8-908c-613b061856c1" containerName="container-00" Oct 14 14:21:54 crc kubenswrapper[4725]: I1014 14:21:54.358530 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="340c0b5f-4be9-4cc5-a618-62ce89447281" containerName="registry-server" Oct 14 14:21:54 crc kubenswrapper[4725]: I1014 14:21:54.359936 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w67qn" Oct 14 14:21:54 crc kubenswrapper[4725]: I1014 14:21:54.370124 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w67qn"] Oct 14 14:21:54 crc kubenswrapper[4725]: I1014 14:21:54.519839 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfhqx\" (UniqueName: \"kubernetes.io/projected/90f07519-1160-45d1-b035-481e2233fe1d-kube-api-access-gfhqx\") pod \"certified-operators-w67qn\" (UID: \"90f07519-1160-45d1-b035-481e2233fe1d\") " pod="openshift-marketplace/certified-operators-w67qn" Oct 14 14:21:54 crc kubenswrapper[4725]: I1014 14:21:54.519941 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90f07519-1160-45d1-b035-481e2233fe1d-catalog-content\") pod \"certified-operators-w67qn\" (UID: \"90f07519-1160-45d1-b035-481e2233fe1d\") " pod="openshift-marketplace/certified-operators-w67qn" Oct 14 14:21:54 crc kubenswrapper[4725]: I1014 14:21:54.519988 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90f07519-1160-45d1-b035-481e2233fe1d-utilities\") pod \"certified-operators-w67qn\" (UID: \"90f07519-1160-45d1-b035-481e2233fe1d\") " pod="openshift-marketplace/certified-operators-w67qn" Oct 14 14:21:54 crc kubenswrapper[4725]: I1014 14:21:54.549755 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vpxz8"] Oct 14 14:21:54 crc kubenswrapper[4725]: I1014 14:21:54.553257 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vpxz8" Oct 14 14:21:54 crc kubenswrapper[4725]: I1014 14:21:54.559583 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vpxz8"] Oct 14 14:21:54 crc kubenswrapper[4725]: I1014 14:21:54.622034 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90f07519-1160-45d1-b035-481e2233fe1d-utilities\") pod \"certified-operators-w67qn\" (UID: \"90f07519-1160-45d1-b035-481e2233fe1d\") " pod="openshift-marketplace/certified-operators-w67qn" Oct 14 14:21:54 crc kubenswrapper[4725]: I1014 14:21:54.622271 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfhqx\" (UniqueName: \"kubernetes.io/projected/90f07519-1160-45d1-b035-481e2233fe1d-kube-api-access-gfhqx\") pod \"certified-operators-w67qn\" (UID: \"90f07519-1160-45d1-b035-481e2233fe1d\") " pod="openshift-marketplace/certified-operators-w67qn" Oct 14 14:21:54 crc kubenswrapper[4725]: I1014 14:21:54.622580 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90f07519-1160-45d1-b035-481e2233fe1d-catalog-content\") pod \"certified-operators-w67qn\" (UID: \"90f07519-1160-45d1-b035-481e2233fe1d\") " pod="openshift-marketplace/certified-operators-w67qn" Oct 14 14:21:54 crc kubenswrapper[4725]: I1014 14:21:54.622704 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90f07519-1160-45d1-b035-481e2233fe1d-utilities\") pod \"certified-operators-w67qn\" (UID: \"90f07519-1160-45d1-b035-481e2233fe1d\") " pod="openshift-marketplace/certified-operators-w67qn" Oct 14 14:21:54 crc kubenswrapper[4725]: I1014 14:21:54.623049 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90f07519-1160-45d1-b035-481e2233fe1d-catalog-content\") pod \"certified-operators-w67qn\" (UID: \"90f07519-1160-45d1-b035-481e2233fe1d\") " pod="openshift-marketplace/certified-operators-w67qn" Oct 14 14:21:54 crc kubenswrapper[4725]: I1014 14:21:54.648317 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfhqx\" (UniqueName: \"kubernetes.io/projected/90f07519-1160-45d1-b035-481e2233fe1d-kube-api-access-gfhqx\") pod \"certified-operators-w67qn\" (UID: \"90f07519-1160-45d1-b035-481e2233fe1d\") " pod="openshift-marketplace/certified-operators-w67qn" Oct 14 14:21:54 crc kubenswrapper[4725]: I1014 14:21:54.684039 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w67qn" Oct 14 14:21:54 crc kubenswrapper[4725]: I1014 14:21:54.724569 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6fcd85d-c970-4ecc-bf32-3f32bc68080c-catalog-content\") pod \"community-operators-vpxz8\" (UID: \"a6fcd85d-c970-4ecc-bf32-3f32bc68080c\") " pod="openshift-marketplace/community-operators-vpxz8" Oct 14 14:21:54 crc kubenswrapper[4725]: I1014 14:21:54.725007 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6fcd85d-c970-4ecc-bf32-3f32bc68080c-utilities\") pod \"community-operators-vpxz8\" (UID: \"a6fcd85d-c970-4ecc-bf32-3f32bc68080c\") " pod="openshift-marketplace/community-operators-vpxz8" Oct 14 14:21:54 crc kubenswrapper[4725]: I1014 14:21:54.725202 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px5k7\" (UniqueName: \"kubernetes.io/projected/a6fcd85d-c970-4ecc-bf32-3f32bc68080c-kube-api-access-px5k7\") pod \"community-operators-vpxz8\" (UID: \"a6fcd85d-c970-4ecc-bf32-3f32bc68080c\") " pod="openshift-marketplace/community-operators-vpxz8" Oct 14 14:21:54 crc kubenswrapper[4725]: I1014 14:21:54.827708 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6fcd85d-c970-4ecc-bf32-3f32bc68080c-catalog-content\") pod \"community-operators-vpxz8\" (UID: \"a6fcd85d-c970-4ecc-bf32-3f32bc68080c\") " pod="openshift-marketplace/community-operators-vpxz8" Oct 14 14:21:54 crc kubenswrapper[4725]: I1014 14:21:54.828038 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6fcd85d-c970-4ecc-bf32-3f32bc68080c-utilities\") pod \"community-operators-vpxz8\" (UID: \"a6fcd85d-c970-4ecc-bf32-3f32bc68080c\") " pod="openshift-marketplace/community-operators-vpxz8" Oct 14 14:21:54 crc kubenswrapper[4725]: I1014 14:21:54.828153 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px5k7\" (UniqueName: \"kubernetes.io/projected/a6fcd85d-c970-4ecc-bf32-3f32bc68080c-kube-api-access-px5k7\") pod \"community-operators-vpxz8\" (UID: \"a6fcd85d-c970-4ecc-bf32-3f32bc68080c\") " pod="openshift-marketplace/community-operators-vpxz8" Oct 14 14:21:54 crc kubenswrapper[4725]: I1014 14:21:54.828561 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6fcd85d-c970-4ecc-bf32-3f32bc68080c-catalog-content\") pod \"community-operators-vpxz8\" (UID: \"a6fcd85d-c970-4ecc-bf32-3f32bc68080c\") " pod="openshift-marketplace/community-operators-vpxz8" Oct 14 14:21:54 crc kubenswrapper[4725]: I1014 14:21:54.828616 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6fcd85d-c970-4ecc-bf32-3f32bc68080c-utilities\") pod \"community-operators-vpxz8\" (UID: \"a6fcd85d-c970-4ecc-bf32-3f32bc68080c\") " pod="openshift-marketplace/community-operators-vpxz8" Oct 14 14:21:54 crc kubenswrapper[4725]: I1014 14:21:54.849257 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px5k7\" (UniqueName: \"kubernetes.io/projected/a6fcd85d-c970-4ecc-bf32-3f32bc68080c-kube-api-access-px5k7\") pod 
\"community-operators-vpxz8\" (UID: \"a6fcd85d-c970-4ecc-bf32-3f32bc68080c\") " pod="openshift-marketplace/community-operators-vpxz8" Oct 14 14:21:54 crc kubenswrapper[4725]: I1014 14:21:54.869562 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vpxz8" Oct 14 14:21:55 crc kubenswrapper[4725]: I1014 14:21:55.379917 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w67qn"] Oct 14 14:21:55 crc kubenswrapper[4725]: I1014 14:21:55.487852 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vpxz8"] Oct 14 14:21:55 crc kubenswrapper[4725]: W1014 14:21:55.492964 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6fcd85d_c970_4ecc_bf32_3f32bc68080c.slice/crio-407edf7082e850e614d4aef269d786c66fc620071b7d62bfcc1634f94a8d263d WatchSource:0}: Error finding container 407edf7082e850e614d4aef269d786c66fc620071b7d62bfcc1634f94a8d263d: Status 404 returned error can't find the container with id 407edf7082e850e614d4aef269d786c66fc620071b7d62bfcc1634f94a8d263d Oct 14 14:21:55 crc kubenswrapper[4725]: I1014 14:21:55.917040 4725 generic.go:334] "Generic (PLEG): container finished" podID="90f07519-1160-45d1-b035-481e2233fe1d" containerID="c0ca1588f4bede971a284a055f5197f8a739455f4ff47fe564fc250de0e95c5f" exitCode=0 Oct 14 14:21:55 crc kubenswrapper[4725]: I1014 14:21:55.917118 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w67qn" event={"ID":"90f07519-1160-45d1-b035-481e2233fe1d","Type":"ContainerDied","Data":"c0ca1588f4bede971a284a055f5197f8a739455f4ff47fe564fc250de0e95c5f"} Oct 14 14:21:55 crc kubenswrapper[4725]: I1014 14:21:55.917157 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w67qn" event={"ID":"90f07519-1160-45d1-b035-481e2233fe1d","Type":"ContainerStarted","Data":"5be7818ea19d62958f113baba8b30e5a2a0e68ef206764508c456bf69807090d"} Oct 14 14:21:55 crc kubenswrapper[4725]: I1014 14:21:55.919208 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 14 14:21:55 crc kubenswrapper[4725]: I1014 14:21:55.921068 4725 generic.go:334] "Generic (PLEG): container finished" podID="a6fcd85d-c970-4ecc-bf32-3f32bc68080c" containerID="711a181608410faf8e1844ce75322d6dee13731328008041502864fc2b8d8a14" exitCode=0 Oct 14 14:21:55 crc kubenswrapper[4725]: I1014 14:21:55.946112 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpxz8" event={"ID":"a6fcd85d-c970-4ecc-bf32-3f32bc68080c","Type":"ContainerDied","Data":"711a181608410faf8e1844ce75322d6dee13731328008041502864fc2b8d8a14"} Oct 14 14:21:55 crc kubenswrapper[4725]: I1014 14:21:55.946175 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpxz8" event={"ID":"a6fcd85d-c970-4ecc-bf32-3f32bc68080c","Type":"ContainerStarted","Data":"407edf7082e850e614d4aef269d786c66fc620071b7d62bfcc1634f94a8d263d"} Oct 14 14:21:57 crc kubenswrapper[4725]: I1014 14:21:57.951031 4725 generic.go:334] "Generic (PLEG): container finished" podID="a6fcd85d-c970-4ecc-bf32-3f32bc68080c" containerID="949634bde698c35cfc0dd9a4e66053c5efc83e432e31bfe42ad1248759849c07" exitCode=0 Oct 14 14:21:57 crc kubenswrapper[4725]: I1014 14:21:57.951142 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-vpxz8" event={"ID":"a6fcd85d-c970-4ecc-bf32-3f32bc68080c","Type":"ContainerDied","Data":"949634bde698c35cfc0dd9a4e66053c5efc83e432e31bfe42ad1248759849c07"} Oct 14 14:21:57 crc kubenswrapper[4725]: I1014 14:21:57.955836 4725 generic.go:334] "Generic (PLEG): container finished" podID="90f07519-1160-45d1-b035-481e2233fe1d" containerID="11f29b746390f7e1ed9473357ad3692328f6fbbf7c2717a5b1b025eb2e0aa5ea" exitCode=0 Oct 14 14:21:57 crc kubenswrapper[4725]: I1014 14:21:57.955888 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w67qn" event={"ID":"90f07519-1160-45d1-b035-481e2233fe1d","Type":"ContainerDied","Data":"11f29b746390f7e1ed9473357ad3692328f6fbbf7c2717a5b1b025eb2e0aa5ea"} Oct 14 14:21:59 crc kubenswrapper[4725]: I1014 14:21:59.974407 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpxz8" event={"ID":"a6fcd85d-c970-4ecc-bf32-3f32bc68080c","Type":"ContainerStarted","Data":"3304ffd1982da88682d9db8647d2ace45789ed8ba162f92d59303ff8cc1c7aa7"} Oct 14 14:21:59 crc kubenswrapper[4725]: I1014 14:21:59.979035 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w67qn" event={"ID":"90f07519-1160-45d1-b035-481e2233fe1d","Type":"ContainerStarted","Data":"8e2ed89a14416fd0727301787f027d291de49559ba3831bcf5e8243ac1d99081"} Oct 14 14:21:59 crc kubenswrapper[4725]: I1014 14:21:59.997861 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vpxz8" podStartSLOduration=3.536936914 podStartE2EDuration="5.997844199s" podCreationTimestamp="2025-10-14 14:21:54 +0000 UTC" firstStartedPulling="2025-10-14 14:21:55.922654935 +0000 UTC m=+4032.771089754" lastFinishedPulling="2025-10-14 14:21:58.38356224 +0000 UTC m=+4035.231997039" observedRunningTime="2025-10-14 14:21:59.993980414 +0000 UTC m=+4036.842415223" watchObservedRunningTime="2025-10-14 14:21:59.997844199 +0000 UTC m=+4036.846279008" Oct 14 14:22:00 crc kubenswrapper[4725]: I1014 14:22:00.017227 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w67qn" podStartSLOduration=2.884648253 podStartE2EDuration="6.017208293s" podCreationTimestamp="2025-10-14 14:21:54 +0000 UTC" firstStartedPulling="2025-10-14 14:21:55.91873773 +0000 UTC m=+4032.767172559" lastFinishedPulling="2025-10-14 14:21:59.05129776 +0000 UTC m=+4035.899732599" observedRunningTime="2025-10-14 14:22:00.016941175 +0000 UTC m=+4036.865375984" watchObservedRunningTime="2025-10-14 14:22:00.017208293 +0000 UTC m=+4036.865643102" Oct 14 14:22:04 crc kubenswrapper[4725]: I1014 14:22:04.684925 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w67qn" Oct 14 14:22:04 crc kubenswrapper[4725]: I1014 14:22:04.685740 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w67qn" Oct 14 14:22:04 crc kubenswrapper[4725]: I1014 14:22:04.753577 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w67qn" Oct 14 14:22:04 crc kubenswrapper[4725]: I1014 14:22:04.870431 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vpxz8" Oct 14 14:22:04 crc kubenswrapper[4725]: I1014 14:22:04.870827 4725 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vpxz8" Oct 14 14:22:04 crc kubenswrapper[4725]: I1014 14:22:04.919311 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vpxz8" Oct 14 14:22:05 crc kubenswrapper[4725]: I1014 14:22:05.075005 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vpxz8" Oct 14 14:22:05 crc kubenswrapper[4725]: I1014 14:22:05.075655 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w67qn" Oct 14 14:22:05 crc kubenswrapper[4725]: I1014 14:22:05.740813 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w67qn"] Oct 14 14:22:06 crc kubenswrapper[4725]: I1014 14:22:06.335572 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vpxz8"] Oct 14 14:22:07 crc kubenswrapper[4725]: I1014 14:22:07.048263 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w67qn" podUID="90f07519-1160-45d1-b035-481e2233fe1d" containerName="registry-server" containerID="cri-o://8e2ed89a14416fd0727301787f027d291de49559ba3831bcf5e8243ac1d99081" gracePeriod=2 Oct 14 14:22:07 crc kubenswrapper[4725]: I1014 14:22:07.048886 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vpxz8" podUID="a6fcd85d-c970-4ecc-bf32-3f32bc68080c" containerName="registry-server" containerID="cri-o://3304ffd1982da88682d9db8647d2ace45789ed8ba162f92d59303ff8cc1c7aa7" gracePeriod=2 Oct 14 14:22:07 crc kubenswrapper[4725]: I1014 14:22:07.599229 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vpxz8" Oct 14 14:22:07 crc kubenswrapper[4725]: I1014 14:22:07.607514 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w67qn" Oct 14 14:22:07 crc kubenswrapper[4725]: I1014 14:22:07.682986 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90f07519-1160-45d1-b035-481e2233fe1d-utilities\") pod \"90f07519-1160-45d1-b035-481e2233fe1d\" (UID: \"90f07519-1160-45d1-b035-481e2233fe1d\") " Oct 14 14:22:07 crc kubenswrapper[4725]: I1014 14:22:07.683380 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6fcd85d-c970-4ecc-bf32-3f32bc68080c-utilities\") pod \"a6fcd85d-c970-4ecc-bf32-3f32bc68080c\" (UID: \"a6fcd85d-c970-4ecc-bf32-3f32bc68080c\") " Oct 14 14:22:07 crc kubenswrapper[4725]: I1014 14:22:07.683431 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-px5k7\" (UniqueName: \"kubernetes.io/projected/a6fcd85d-c970-4ecc-bf32-3f32bc68080c-kube-api-access-px5k7\") pod \"a6fcd85d-c970-4ecc-bf32-3f32bc68080c\" (UID: \"a6fcd85d-c970-4ecc-bf32-3f32bc68080c\") " Oct 14 14:22:07 crc kubenswrapper[4725]: I1014 14:22:07.683879 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90f07519-1160-45d1-b035-481e2233fe1d-catalog-content\") pod \"90f07519-1160-45d1-b035-481e2233fe1d\" (UID: \"90f07519-1160-45d1-b035-481e2233fe1d\") " Oct 14 14:22:07 crc kubenswrapper[4725]: I1014 14:22:07.684794 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfhqx\" (UniqueName: \"kubernetes.io/projected/90f07519-1160-45d1-b035-481e2233fe1d-kube-api-access-gfhqx\") pod \"90f07519-1160-45d1-b035-481e2233fe1d\" (UID: \"90f07519-1160-45d1-b035-481e2233fe1d\") " Oct 14 14:22:07 crc kubenswrapper[4725]: I1014 14:22:07.684867 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6fcd85d-c970-4ecc-bf32-3f32bc68080c-catalog-content\") pod \"a6fcd85d-c970-4ecc-bf32-3f32bc68080c\" (UID: \"a6fcd85d-c970-4ecc-bf32-3f32bc68080c\") " Oct 14 14:22:07 crc kubenswrapper[4725]: I1014 14:22:07.684393 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6fcd85d-c970-4ecc-bf32-3f32bc68080c-utilities" (OuterVolumeSpecName: "utilities") pod "a6fcd85d-c970-4ecc-bf32-3f32bc68080c" (UID: "a6fcd85d-c970-4ecc-bf32-3f32bc68080c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 14:22:07 crc kubenswrapper[4725]: I1014 14:22:07.685500 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90f07519-1160-45d1-b035-481e2233fe1d-utilities" (OuterVolumeSpecName: "utilities") pod "90f07519-1160-45d1-b035-481e2233fe1d" (UID: "90f07519-1160-45d1-b035-481e2233fe1d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 14:22:07 crc kubenswrapper[4725]: I1014 14:22:07.685840 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90f07519-1160-45d1-b035-481e2233fe1d-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 14:22:07 crc kubenswrapper[4725]: I1014 14:22:07.685867 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6fcd85d-c970-4ecc-bf32-3f32bc68080c-utilities\") on node \"crc\" DevicePath \"\"" Oct 14 14:22:07 crc kubenswrapper[4725]: I1014 14:22:07.691055 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90f07519-1160-45d1-b035-481e2233fe1d-kube-api-access-gfhqx" (OuterVolumeSpecName: "kube-api-access-gfhqx") pod "90f07519-1160-45d1-b035-481e2233fe1d" (UID: "90f07519-1160-45d1-b035-481e2233fe1d"). InnerVolumeSpecName "kube-api-access-gfhqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:22:07 crc kubenswrapper[4725]: I1014 14:22:07.691404 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6fcd85d-c970-4ecc-bf32-3f32bc68080c-kube-api-access-px5k7" (OuterVolumeSpecName: "kube-api-access-px5k7") pod "a6fcd85d-c970-4ecc-bf32-3f32bc68080c" (UID: "a6fcd85d-c970-4ecc-bf32-3f32bc68080c"). InnerVolumeSpecName "kube-api-access-px5k7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:22:07 crc kubenswrapper[4725]: I1014 14:22:07.743909 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6fcd85d-c970-4ecc-bf32-3f32bc68080c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a6fcd85d-c970-4ecc-bf32-3f32bc68080c" (UID: "a6fcd85d-c970-4ecc-bf32-3f32bc68080c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 14:22:07 crc kubenswrapper[4725]: I1014 14:22:07.751584 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90f07519-1160-45d1-b035-481e2233fe1d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "90f07519-1160-45d1-b035-481e2233fe1d" (UID: "90f07519-1160-45d1-b035-481e2233fe1d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 14:22:07 crc kubenswrapper[4725]: I1014 14:22:07.790955 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90f07519-1160-45d1-b035-481e2233fe1d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 14:22:07 crc kubenswrapper[4725]: I1014 14:22:07.791011 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfhqx\" (UniqueName: \"kubernetes.io/projected/90f07519-1160-45d1-b035-481e2233fe1d-kube-api-access-gfhqx\") on node \"crc\" DevicePath \"\"" Oct 14 14:22:07 crc kubenswrapper[4725]: I1014 14:22:07.791026 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6fcd85d-c970-4ecc-bf32-3f32bc68080c-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 14 14:22:07 crc kubenswrapper[4725]: I1014 14:22:07.791048 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-px5k7\" (UniqueName: \"kubernetes.io/projected/a6fcd85d-c970-4ecc-bf32-3f32bc68080c-kube-api-access-px5k7\") on node \"crc\" DevicePath \"\"" Oct 14 14:22:08 crc kubenswrapper[4725]: I1014 14:22:08.058835 4725 generic.go:334] "Generic (PLEG): container finished" podID="a6fcd85d-c970-4ecc-bf32-3f32bc68080c" containerID="3304ffd1982da88682d9db8647d2ace45789ed8ba162f92d59303ff8cc1c7aa7" exitCode=0 Oct 14 14:22:08 crc kubenswrapper[4725]: I1014 14:22:08.058925 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vpxz8" Oct 14 14:22:08 crc kubenswrapper[4725]: I1014 14:22:08.058940 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpxz8" event={"ID":"a6fcd85d-c970-4ecc-bf32-3f32bc68080c","Type":"ContainerDied","Data":"3304ffd1982da88682d9db8647d2ace45789ed8ba162f92d59303ff8cc1c7aa7"} Oct 14 14:22:08 crc kubenswrapper[4725]: I1014 14:22:08.059008 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpxz8" event={"ID":"a6fcd85d-c970-4ecc-bf32-3f32bc68080c","Type":"ContainerDied","Data":"407edf7082e850e614d4aef269d786c66fc620071b7d62bfcc1634f94a8d263d"} Oct 14 14:22:08 crc kubenswrapper[4725]: I1014 14:22:08.059050 4725 scope.go:117] "RemoveContainer" containerID="3304ffd1982da88682d9db8647d2ace45789ed8ba162f92d59303ff8cc1c7aa7" Oct 14 14:22:08 crc kubenswrapper[4725]: I1014 14:22:08.062417 4725 generic.go:334] "Generic (PLEG): container finished" podID="90f07519-1160-45d1-b035-481e2233fe1d" containerID="8e2ed89a14416fd0727301787f027d291de49559ba3831bcf5e8243ac1d99081" exitCode=0 Oct 14 14:22:08 crc kubenswrapper[4725]: I1014 14:22:08.062501 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w67qn" event={"ID":"90f07519-1160-45d1-b035-481e2233fe1d","Type":"ContainerDied","Data":"8e2ed89a14416fd0727301787f027d291de49559ba3831bcf5e8243ac1d99081"} Oct 14 14:22:08 crc kubenswrapper[4725]: I1014 14:22:08.062532 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w67qn" event={"ID":"90f07519-1160-45d1-b035-481e2233fe1d","Type":"ContainerDied","Data":"5be7818ea19d62958f113baba8b30e5a2a0e68ef206764508c456bf69807090d"} Oct 14 14:22:08 crc kubenswrapper[4725]: I1014 14:22:08.062554 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w67qn" Oct 14 14:22:08 crc kubenswrapper[4725]: I1014 14:22:08.079405 4725 scope.go:117] "RemoveContainer" containerID="949634bde698c35cfc0dd9a4e66053c5efc83e432e31bfe42ad1248759849c07" Oct 14 14:22:08 crc kubenswrapper[4725]: I1014 14:22:08.095905 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vpxz8"] Oct 14 14:22:08 crc kubenswrapper[4725]: I1014 14:22:08.111496 4725 scope.go:117] "RemoveContainer" containerID="711a181608410faf8e1844ce75322d6dee13731328008041502864fc2b8d8a14" Oct 14 14:22:08 crc kubenswrapper[4725]: I1014 14:22:08.112534 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vpxz8"] Oct 14 14:22:08 crc kubenswrapper[4725]: I1014 14:22:08.125014 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w67qn"] Oct 14 14:22:08 crc kubenswrapper[4725]: I1014 14:22:08.129376 4725 scope.go:117] "RemoveContainer" containerID="3304ffd1982da88682d9db8647d2ace45789ed8ba162f92d59303ff8cc1c7aa7" Oct 14 14:22:08 crc kubenswrapper[4725]: E1014 14:22:08.129918 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3304ffd1982da88682d9db8647d2ace45789ed8ba162f92d59303ff8cc1c7aa7\": container with ID starting with 3304ffd1982da88682d9db8647d2ace45789ed8ba162f92d59303ff8cc1c7aa7 not found: ID does not exist" containerID="3304ffd1982da88682d9db8647d2ace45789ed8ba162f92d59303ff8cc1c7aa7" Oct 14 14:22:08 crc kubenswrapper[4725]: I1014 14:22:08.129955 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3304ffd1982da88682d9db8647d2ace45789ed8ba162f92d59303ff8cc1c7aa7"} err="failed to get container status \"3304ffd1982da88682d9db8647d2ace45789ed8ba162f92d59303ff8cc1c7aa7\": rpc error: code = NotFound desc = could not find container \"3304ffd1982da88682d9db8647d2ace45789ed8ba162f92d59303ff8cc1c7aa7\": container with ID starting with 3304ffd1982da88682d9db8647d2ace45789ed8ba162f92d59303ff8cc1c7aa7 not found: ID does not exist" Oct 14 14:22:08 crc kubenswrapper[4725]: I1014 14:22:08.129977 4725 scope.go:117] "RemoveContainer" containerID="949634bde698c35cfc0dd9a4e66053c5efc83e432e31bfe42ad1248759849c07" Oct 14 14:22:08 crc kubenswrapper[4725]: E1014 14:22:08.130288 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"949634bde698c35cfc0dd9a4e66053c5efc83e432e31bfe42ad1248759849c07\": container with ID starting with 949634bde698c35cfc0dd9a4e66053c5efc83e432e31bfe42ad1248759849c07 not found: ID does not exist" containerID="949634bde698c35cfc0dd9a4e66053c5efc83e432e31bfe42ad1248759849c07" Oct 14 14:22:08 crc kubenswrapper[4725]: I1014 14:22:08.130310 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"949634bde698c35cfc0dd9a4e66053c5efc83e432e31bfe42ad1248759849c07"} err="failed to get container status \"949634bde698c35cfc0dd9a4e66053c5efc83e432e31bfe42ad1248759849c07\": rpc error: code = NotFound desc = could not find container \"949634bde698c35cfc0dd9a4e66053c5efc83e432e31bfe42ad1248759849c07\": container with ID starting with 949634bde698c35cfc0dd9a4e66053c5efc83e432e31bfe42ad1248759849c07 not found: ID does not exist" Oct 14 14:22:08 crc kubenswrapper[4725]: I1014 14:22:08.130322 4725 scope.go:117] "RemoveContainer" 
containerID="711a181608410faf8e1844ce75322d6dee13731328008041502864fc2b8d8a14" Oct 14 14:22:08 crc kubenswrapper[4725]: E1014 14:22:08.130804 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"711a181608410faf8e1844ce75322d6dee13731328008041502864fc2b8d8a14\": container with ID starting with 711a181608410faf8e1844ce75322d6dee13731328008041502864fc2b8d8a14 not found: ID does not exist" containerID="711a181608410faf8e1844ce75322d6dee13731328008041502864fc2b8d8a14" Oct 14 14:22:08 crc kubenswrapper[4725]: I1014 14:22:08.130838 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"711a181608410faf8e1844ce75322d6dee13731328008041502864fc2b8d8a14"} err="failed to get container status \"711a181608410faf8e1844ce75322d6dee13731328008041502864fc2b8d8a14\": rpc error: code = NotFound desc = could not find container \"711a181608410faf8e1844ce75322d6dee13731328008041502864fc2b8d8a14\": container with ID starting with 711a181608410faf8e1844ce75322d6dee13731328008041502864fc2b8d8a14 not found: ID does not exist" Oct 14 14:22:08 crc kubenswrapper[4725]: I1014 14:22:08.130860 4725 scope.go:117] "RemoveContainer" containerID="8e2ed89a14416fd0727301787f027d291de49559ba3831bcf5e8243ac1d99081" Oct 14 14:22:08 crc kubenswrapper[4725]: I1014 14:22:08.133713 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w67qn"] Oct 14 14:22:08 crc kubenswrapper[4725]: I1014 14:22:08.148595 4725 scope.go:117] "RemoveContainer" containerID="11f29b746390f7e1ed9473357ad3692328f6fbbf7c2717a5b1b025eb2e0aa5ea" Oct 14 14:22:08 crc kubenswrapper[4725]: I1014 14:22:08.170722 4725 scope.go:117] "RemoveContainer" containerID="c0ca1588f4bede971a284a055f5197f8a739455f4ff47fe564fc250de0e95c5f" Oct 14 14:22:08 crc kubenswrapper[4725]: I1014 14:22:08.191911 4725 scope.go:117] "RemoveContainer" containerID="8e2ed89a14416fd0727301787f027d291de49559ba3831bcf5e8243ac1d99081" Oct 14 14:22:08 crc kubenswrapper[4725]: E1014 14:22:08.192700 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e2ed89a14416fd0727301787f027d291de49559ba3831bcf5e8243ac1d99081\": container with ID starting with 8e2ed89a14416fd0727301787f027d291de49559ba3831bcf5e8243ac1d99081 not found: ID does not exist" containerID="8e2ed89a14416fd0727301787f027d291de49559ba3831bcf5e8243ac1d99081" Oct 14 14:22:08 crc kubenswrapper[4725]: I1014 14:22:08.192755 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e2ed89a14416fd0727301787f027d291de49559ba3831bcf5e8243ac1d99081"} err="failed to get container status \"8e2ed89a14416fd0727301787f027d291de49559ba3831bcf5e8243ac1d99081\": rpc error: code = NotFound desc = could not find container \"8e2ed89a14416fd0727301787f027d291de49559ba3831bcf5e8243ac1d99081\": container with ID starting with 8e2ed89a14416fd0727301787f027d291de49559ba3831bcf5e8243ac1d99081 not found: ID does not exist" Oct 14 14:22:08 crc kubenswrapper[4725]: I1014 14:22:08.192788 4725 scope.go:117] "RemoveContainer" containerID="11f29b746390f7e1ed9473357ad3692328f6fbbf7c2717a5b1b025eb2e0aa5ea" Oct 14 14:22:08 crc kubenswrapper[4725]: E1014 14:22:08.193375 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11f29b746390f7e1ed9473357ad3692328f6fbbf7c2717a5b1b025eb2e0aa5ea\": container with ID starting with 
11f29b746390f7e1ed9473357ad3692328f6fbbf7c2717a5b1b025eb2e0aa5ea not found: ID does not exist" containerID="11f29b746390f7e1ed9473357ad3692328f6fbbf7c2717a5b1b025eb2e0aa5ea" Oct 14 14:22:08 crc kubenswrapper[4725]: I1014 14:22:08.193421 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11f29b746390f7e1ed9473357ad3692328f6fbbf7c2717a5b1b025eb2e0aa5ea"} err="failed to get container status \"11f29b746390f7e1ed9473357ad3692328f6fbbf7c2717a5b1b025eb2e0aa5ea\": rpc error: code = NotFound desc = could not find container \"11f29b746390f7e1ed9473357ad3692328f6fbbf7c2717a5b1b025eb2e0aa5ea\": container with ID starting with 11f29b746390f7e1ed9473357ad3692328f6fbbf7c2717a5b1b025eb2e0aa5ea not found: ID does not exist" Oct 14 14:22:08 crc kubenswrapper[4725]: I1014 14:22:08.193503 4725 scope.go:117] "RemoveContainer" containerID="c0ca1588f4bede971a284a055f5197f8a739455f4ff47fe564fc250de0e95c5f" Oct 14 14:22:08 crc kubenswrapper[4725]: E1014 14:22:08.193926 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0ca1588f4bede971a284a055f5197f8a739455f4ff47fe564fc250de0e95c5f\": container with ID starting with c0ca1588f4bede971a284a055f5197f8a739455f4ff47fe564fc250de0e95c5f not found: ID does not exist" containerID="c0ca1588f4bede971a284a055f5197f8a739455f4ff47fe564fc250de0e95c5f" Oct 14 14:22:08 crc kubenswrapper[4725]: I1014 14:22:08.193963 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0ca1588f4bede971a284a055f5197f8a739455f4ff47fe564fc250de0e95c5f"} err="failed to get container status \"c0ca1588f4bede971a284a055f5197f8a739455f4ff47fe564fc250de0e95c5f\": rpc error: code = NotFound desc = could not find container \"c0ca1588f4bede971a284a055f5197f8a739455f4ff47fe564fc250de0e95c5f\": container with ID starting with c0ca1588f4bede971a284a055f5197f8a739455f4ff47fe564fc250de0e95c5f not found: ID does not exist" Oct 14 14:22:09 crc kubenswrapper[4725]: I1014 14:22:09.938506 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90f07519-1160-45d1-b035-481e2233fe1d" path="/var/lib/kubelet/pods/90f07519-1160-45d1-b035-481e2233fe1d/volumes" Oct 14 14:22:09 crc kubenswrapper[4725]: I1014 14:22:09.940678 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6fcd85d-c970-4ecc-bf32-3f32bc68080c" path="/var/lib/kubelet/pods/a6fcd85d-c970-4ecc-bf32-3f32bc68080c/volumes" Oct 14 14:22:53 crc kubenswrapper[4725]: I1014 14:22:53.589238 4725 generic.go:334] "Generic (PLEG): container finished" podID="ddd12d26-f51d-4648-8f6b-abee443e6911" containerID="87a2471b84a838381e4a020b84b1a1d8b971fe47a550eb3ec0bedd4f0b4456ee" exitCode=0 Oct 14 14:22:53 crc kubenswrapper[4725]: I1014 14:22:53.589339 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5kqrn/must-gather-96nvv" event={"ID":"ddd12d26-f51d-4648-8f6b-abee443e6911","Type":"ContainerDied","Data":"87a2471b84a838381e4a020b84b1a1d8b971fe47a550eb3ec0bedd4f0b4456ee"} Oct 14 14:22:53 crc kubenswrapper[4725]: I1014 14:22:53.591383 4725 scope.go:117] "RemoveContainer" containerID="87a2471b84a838381e4a020b84b1a1d8b971fe47a550eb3ec0bedd4f0b4456ee" Oct 14 14:22:54 crc kubenswrapper[4725]: I1014 14:22:54.213169 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5kqrn_must-gather-96nvv_ddd12d26-f51d-4648-8f6b-abee443e6911/gather/0.log" Oct 14 14:23:02 crc kubenswrapper[4725]: I1014 14:23:02.521213 4725 
patch_prober.go:28] interesting pod/machine-config-daemon-t9hh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 14:23:02 crc kubenswrapper[4725]: I1014 14:23:02.521760 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 14:23:04 crc kubenswrapper[4725]: I1014 14:23:04.704781 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5kqrn/must-gather-96nvv"] Oct 14 14:23:04 crc kubenswrapper[4725]: I1014 14:23:04.705324 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-5kqrn/must-gather-96nvv" podUID="ddd12d26-f51d-4648-8f6b-abee443e6911" containerName="copy" containerID="cri-o://95e231b9ff39b7b8ea02311d12dd7a6f0c4124d703edbacba58debdb6d4f2213" gracePeriod=2 Oct 14 14:23:04 crc kubenswrapper[4725]: I1014 14:23:04.712978 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5kqrn/must-gather-96nvv"] Oct 14 14:23:05 crc kubenswrapper[4725]: I1014 14:23:05.616768 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5kqrn_must-gather-96nvv_ddd12d26-f51d-4648-8f6b-abee443e6911/copy/0.log" Oct 14 14:23:05 crc kubenswrapper[4725]: I1014 14:23:05.617729 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5kqrn/must-gather-96nvv" Oct 14 14:23:05 crc kubenswrapper[4725]: I1014 14:23:05.735257 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5kqrn_must-gather-96nvv_ddd12d26-f51d-4648-8f6b-abee443e6911/copy/0.log" Oct 14 14:23:05 crc kubenswrapper[4725]: I1014 14:23:05.735868 4725 generic.go:334] "Generic (PLEG): container finished" podID="ddd12d26-f51d-4648-8f6b-abee443e6911" containerID="95e231b9ff39b7b8ea02311d12dd7a6f0c4124d703edbacba58debdb6d4f2213" exitCode=143 Oct 14 14:23:05 crc kubenswrapper[4725]: I1014 14:23:05.735918 4725 scope.go:117] "RemoveContainer" containerID="95e231b9ff39b7b8ea02311d12dd7a6f0c4124d703edbacba58debdb6d4f2213" Oct 14 14:23:05 crc kubenswrapper[4725]: I1014 14:23:05.736048 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5kqrn/must-gather-96nvv" Oct 14 14:23:05 crc kubenswrapper[4725]: I1014 14:23:05.761550 4725 scope.go:117] "RemoveContainer" containerID="87a2471b84a838381e4a020b84b1a1d8b971fe47a550eb3ec0bedd4f0b4456ee" Oct 14 14:23:05 crc kubenswrapper[4725]: I1014 14:23:05.764751 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ddd12d26-f51d-4648-8f6b-abee443e6911-must-gather-output\") pod \"ddd12d26-f51d-4648-8f6b-abee443e6911\" (UID: \"ddd12d26-f51d-4648-8f6b-abee443e6911\") " Oct 14 14:23:05 crc kubenswrapper[4725]: I1014 14:23:05.764865 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwktb\" (UniqueName: \"kubernetes.io/projected/ddd12d26-f51d-4648-8f6b-abee443e6911-kube-api-access-dwktb\") pod \"ddd12d26-f51d-4648-8f6b-abee443e6911\" (UID: \"ddd12d26-f51d-4648-8f6b-abee443e6911\") " Oct 14 14:23:05 crc kubenswrapper[4725]: I1014 14:23:05.771898 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddd12d26-f51d-4648-8f6b-abee443e6911-kube-api-access-dwktb" (OuterVolumeSpecName: "kube-api-access-dwktb") pod "ddd12d26-f51d-4648-8f6b-abee443e6911" (UID: "ddd12d26-f51d-4648-8f6b-abee443e6911"). InnerVolumeSpecName "kube-api-access-dwktb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 14 14:23:05 crc kubenswrapper[4725]: I1014 14:23:05.791905 4725 scope.go:117] "RemoveContainer" containerID="95e231b9ff39b7b8ea02311d12dd7a6f0c4124d703edbacba58debdb6d4f2213" Oct 14 14:23:05 crc kubenswrapper[4725]: E1014 14:23:05.797346 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95e231b9ff39b7b8ea02311d12dd7a6f0c4124d703edbacba58debdb6d4f2213\": container with ID starting with 95e231b9ff39b7b8ea02311d12dd7a6f0c4124d703edbacba58debdb6d4f2213 not found: ID does not exist" containerID="95e231b9ff39b7b8ea02311d12dd7a6f0c4124d703edbacba58debdb6d4f2213" Oct 14 14:23:05 crc kubenswrapper[4725]: I1014 14:23:05.797395 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95e231b9ff39b7b8ea02311d12dd7a6f0c4124d703edbacba58debdb6d4f2213"} err="failed to get container status \"95e231b9ff39b7b8ea02311d12dd7a6f0c4124d703edbacba58debdb6d4f2213\": rpc error: code = NotFound desc = could not find container \"95e231b9ff39b7b8ea02311d12dd7a6f0c4124d703edbacba58debdb6d4f2213\": container with ID starting with 95e231b9ff39b7b8ea02311d12dd7a6f0c4124d703edbacba58debdb6d4f2213 not found: ID does not exist" Oct 14 14:23:05 crc kubenswrapper[4725]: I1014 14:23:05.797425 4725 scope.go:117] "RemoveContainer" containerID="87a2471b84a838381e4a020b84b1a1d8b971fe47a550eb3ec0bedd4f0b4456ee" Oct 14 14:23:05 crc kubenswrapper[4725]: E1014 14:23:05.797797 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87a2471b84a838381e4a020b84b1a1d8b971fe47a550eb3ec0bedd4f0b4456ee\": container with ID starting with 87a2471b84a838381e4a020b84b1a1d8b971fe47a550eb3ec0bedd4f0b4456ee not found: ID does not exist" containerID="87a2471b84a838381e4a020b84b1a1d8b971fe47a550eb3ec0bedd4f0b4456ee" Oct 14 14:23:05 crc kubenswrapper[4725]: I1014 14:23:05.797813 4725 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"87a2471b84a838381e4a020b84b1a1d8b971fe47a550eb3ec0bedd4f0b4456ee"} err="failed to get container status \"87a2471b84a838381e4a020b84b1a1d8b971fe47a550eb3ec0bedd4f0b4456ee\": rpc error: code = NotFound desc = could not find container \"87a2471b84a838381e4a020b84b1a1d8b971fe47a550eb3ec0bedd4f0b4456ee\": container with ID starting with 87a2471b84a838381e4a020b84b1a1d8b971fe47a550eb3ec0bedd4f0b4456ee not found: ID does not exist" Oct 14 14:23:05 crc kubenswrapper[4725]: I1014 14:23:05.866834 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwktb\" (UniqueName: \"kubernetes.io/projected/ddd12d26-f51d-4648-8f6b-abee443e6911-kube-api-access-dwktb\") on node \"crc\" DevicePath \"\"" Oct 14 14:23:05 crc kubenswrapper[4725]: I1014 14:23:05.921743 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddd12d26-f51d-4648-8f6b-abee443e6911-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "ddd12d26-f51d-4648-8f6b-abee443e6911" (UID: "ddd12d26-f51d-4648-8f6b-abee443e6911"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 14 14:23:05 crc kubenswrapper[4725]: I1014 14:23:05.934526 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddd12d26-f51d-4648-8f6b-abee443e6911" path="/var/lib/kubelet/pods/ddd12d26-f51d-4648-8f6b-abee443e6911/volumes" Oct 14 14:23:05 crc kubenswrapper[4725]: I1014 14:23:05.969380 4725 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ddd12d26-f51d-4648-8f6b-abee443e6911-must-gather-output\") on node \"crc\" DevicePath \"\"" Oct 14 14:23:32 crc kubenswrapper[4725]: I1014 14:23:32.520691 4725 patch_prober.go:28] interesting pod/machine-config-daemon-t9hh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 14:23:32 crc kubenswrapper[4725]: I1014 14:23:32.521283 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 14:24:02 crc kubenswrapper[4725]: I1014 14:24:02.521251 4725 patch_prober.go:28] interesting pod/machine-config-daemon-t9hh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 14 14:24:02 crc kubenswrapper[4725]: I1014 14:24:02.522024 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 14 14:24:02 crc kubenswrapper[4725]: I1014 14:24:02.522108 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" Oct 14 14:24:02 crc kubenswrapper[4725]: I1014 14:24:02.523370 4725 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c7591d7dd7bc03041518beda268ac23dc385eeb1863fae61d197fbff66163022"} pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 14 14:24:02 crc kubenswrapper[4725]: I1014 14:24:02.523518 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" podUID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerName="machine-config-daemon" containerID="cri-o://c7591d7dd7bc03041518beda268ac23dc385eeb1863fae61d197fbff66163022" gracePeriod=600 Oct 14 14:24:03 crc kubenswrapper[4725]: I1014 14:24:03.404993 4725 generic.go:334] "Generic (PLEG): container finished" podID="ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c" containerID="c7591d7dd7bc03041518beda268ac23dc385eeb1863fae61d197fbff66163022" exitCode=0 Oct 14 14:24:03 crc kubenswrapper[4725]: I1014 14:24:03.405043 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" event={"ID":"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c","Type":"ContainerDied","Data":"c7591d7dd7bc03041518beda268ac23dc385eeb1863fae61d197fbff66163022"} Oct 14 14:24:03 crc kubenswrapper[4725]: I1014 14:24:03.405372 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-t9hh9" event={"ID":"ba5d5b29-1ac0-44e8-b0df-4ef6af8c997c","Type":"ContainerStarted","Data":"cf4c357db31c48311dab327996e52c47a13ba553058c8cf391d3853888aeb3b7"} Oct 14 14:24:03 crc kubenswrapper[4725]: I1014 14:24:03.405394 4725 scope.go:117] "RemoveContainer" containerID="1e2cf79492a165ced9cc9a27ff00cb3288ac5a0abfa5f696ffa2da70ed76d129" Oct 14 14:25:00 crc kubenswrapper[4725]: I1014 14:25:00.051946 4725 scope.go:117] "RemoveContainer" containerID="2d350d9bcf4f6b6e51d3f2de86c0ff30955b2271f1bb50014dc348417b0c62ae"